url
stringlengths 13
4.35k
| tag
stringclasses 1
value | text
stringlengths 109
628k
| file_path
stringlengths 109
155
| dump
stringclasses 96
values | file_size_in_byte
int64 112
630k
| line_count
int64 1
3.76k
|
---|---|---|---|---|---|---|
https://www.goer.ny.gov/Employee_Resources/VRWS/vrwsschedule.html | code | In Payroll Period column, indicate beginning and ending dates of each pay period covered by the agreement.
For each pay period, indicate all days/time worked (include number of hours worked) and days/time not worked, that is, indicate all pass days and all VR time off. If you plan to use other accruals in conjunction with VR schedule, these days/this time should also be included in the schedule. Use the codes listed below to indicate category of days/time.
Where the schedule repeats each pay period, fill out the schedule (include number of hours worked/not worked) and days off for the first pay period only and indicate "same" for subsequent pay periods.
For partial day absences, indicate number of hours worked/off and code for category of leave (for example, 5.5-W; 2-VR).
|Work/Leave Category Codes|
|VR - VR Leave||AL - Annual Leave|
|W - Day Worked||X - Pass Days| | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886964.22/warc/CC-MAIN-20180117193009-20180117213009-00604.warc.gz | CC-MAIN-2018-05 | 877 | 7 |
https://unix.stackexchange.com/questions/69284/what-is-the-correct-way-of-setting-up-a-squid-proxy-in-front-of-wireless-router | code | We are a small office of 20 users. Our current network is like below. We have a static IP on the Wireless Router's WAN port.
Local network <-> Wireless Router <-> Cable Internet
I want to put in a Linux box that will run Squid and ownCloud, like below. The ISP's static IP will shift to the Linux box, which will connect to the router's WAN port.
Local network <-> Wireless Router <-> Linux box <-> Cable Internet
- Is it possible to configure Squid such that users who connect to the Router do not have to configure any proxy for themselves but (and?) are forced to go through the proxy?
- I want to set up whitelists for certain users. Is it possible to enforce whitelists based on ACLs?
- How will the ACLs work on mobile devices? | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00533.warc.gz | CC-MAIN-2019-39 | 733 | 7 |
https://www.statisticshowto.com/lebesgue-integration-overview-simple-definition/ | code | Lebesgue integrals are a powerful form of integration that can work with the most pathological of functions, including unbounded functions and highly discontinuous functions.
Difference Between Riemann Integration and Lebesgue Integration
Riemann integrals work by subdividing the domain into a number of piecewise constant functions for each sub-interval. Lebesgue integration works by subdividing the range instead. An intuitive example of the difference between the two is given in this analogy by Chapman (2010):
Riemann and Lebesgue add up a group of coins with different face values. Riemann adds up each of the coin’s face values, one by one. Let’s say he picks up a dime, a nickel and then a penny: he’ll count 10 + 5 + 1…. continuing until all of the coins have been added, individually. Then he’ll give his total. In contrast, Lebesgue first sorts the coins into piles of dimes, nickels and pennies. He counts each pile, then sums up the totals from each pile.
When Lebesgue sorts the coins into piles, he’s partitioning the value axis (i.e. the axis with the coin’s numerical values) and taking preimages—sets of function arguments that correspond to a subset in the range. These preimages are the fundamental building blocks of Lebesgue integration.
The basic procedure (Tao, 2010) is:
- Subdivide the function’s range into a finite number of segments.
- Construct a simple function by using a function with values that are the same finitely many numbers.
- Keep on adding points in the range of the original function, taking the limit as you go.
Formal Definition of the Lebesgue Integral
The Lebesgue integral can be formally defined as (Wojas & Krupa, 2017):
- sn : A ↦ ℝ is a nondecreasing sequence of nonnegative simple measurable functions, the limit of which is limn→∞ sn(x) = f(x) for every x ∈ A. (note: A is a Lebesgue measurable subset of ℝ).
Note that for this particular definition, the order of sn is not important.
Chapman, C. (2010). Real Mathematical Analysis (Undergraduate Texts in Mathematics). Springer New York.
Hunter, J. Riemann Integral. Retrieved January 14, 2019 from: https://www.math.ucdavis.edu/~hunter/m125b/ch1.pdf
Kestelman, H. “Lebesgue Integral of a Non-Negative Function” and “Lebesgue Integrals of Functions Which Are Sometimes Negative.” Chs. 5-6 in Modern Theories of Integration, 2nd rev. ed. New York: Dover, pp. 113-160, 1960.
Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed. New York: McGraw-Hill, p. 141, 1984.
Tao, T. (2010). 245A, Notes 2: The Lebesgue integral. Retrieved January 14, 2020 from: https://terrytao.wordpress.com/2010/09/19/245a-notes-2-the-lebesgue-integral/
Wojas, W. & Krupa, J. Familiarizing Students with Definition of Lebesgue Integral: Examples of Calculation Directly from Its Definition Using Mathematica. Math.Comput.Sci. (2017) 11:363–381 | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00778.warc.gz | CC-MAIN-2023-14 | 2,890 | 19 |
http://discourse.iapct.org/t/pct-as-a-control-system-for-organization/8364 | code | (Gavin Ritz 2008.04.20.10.17NZT)
PCT and VSM are not compatible (they have very different premises) and PCT
in itself has no applicable model to do what you ask. VSM has also no
appropriate methodology to do this even Beer resigned himself to this. His
masterpiece for practical application is Diagnosing the System, and
Designing Freedom, not very good at either diagnosing or designing.
Control in organisation is vested in the roles and it's the inter roles that
are required to be so called "authori-tised" with the appropriate
authorities assigned to that role, specifically the cross functional roles.
I suggest you look at Elliot Jaques Requisite Organization for a glimpse
into seeing how to do this. Buy Jaques books on Executive Leadership and his
book on Requisite Organization and it will tell exactly how to do what you
want to achieve.
Ultimately any complexity type approach forces you to decide what is the
"parts and what is the whole". Parts and wholes are often different
depending on what you are trying to achieve.
Controlling at an organisational level really just involves- "stopping",
getting someone to "do something" with parameters, delaying, deciding,
reporting, persuading, informing and not much else. Take these few
authorities of control and ask the question of your own role and the role of
another, see what you think your authorities are in terms of "your role vs.
her/his role) and then ask the other person the same thing.
You will be shocked to know that they will differ hugely. If you have not
defined the controlling parameters I mentioned above. (Is that not the
source of a massive conflict)? Ask this question, your colleague at the same
of level organisation as you in marketing what does he think he can tell you
to do and what do you think you can tell him and he is compelled to do it?
Almost 99% of organisations have no idea how controlling works and don't
have any such authorities in place to manage controlling. In fact it's the
greatest source of conflict in organisations. Most firms will have financial
authorities and no people authorities.
Thanks for your response. I agree that the operationalisation of all these
concepts is a critical and difficult task. I just bought all your books and
will explore how PCT and VSM can maybe be supplemental.
To what extend can the PCT concept be transferred to an organisational
environment like an airline ? Has the PCT concept explicitly been used to
model the control structure of a company ? It would be helpful to read about
Thanks for any suggestions or links.
[From Bill Powers (2009.04.19.0-712 MDT)]
I am trying to use these concepts including the Viable System Model
for the description of a safety management system so your (and
others of course) comments are very welcome.
Tell me something. Suppose you design a system for managing safety,
and it doesn't work as well as you want. You hire a consultant, and
he reports to you that your system's output actions don't have
enough variety to match the variety of the environment you're trying
to control. What does that tell you about how to make the system work
That would be a very underspecified report and no help at all. I
would ask him on what specific dimensions the system lacks variety.
Then I could think of measures to amplify that specific system
variety, or how to attenuate the environmental variety impacting the
system, to recreate homeostasis.
BP: Fine, but you need to know something about control systems to do
all that successfully. Behind what you call "amplifying variety" is
something much more specific: identifying all the variables that need
to be controlled, and providing some means of affecting each of those
variables. That's all that "requisite variety" means. And you need to
develop some way to monitor the results to see if you're getting the
result you want or something else. That's the rest of the control
system. In fact, to find those variables and figure out how to
control them doesn't require thinking about variety at all, though
there's nothing to stop you from doing so if you wish, after you've
solved all the real problems. As you say, variety is a rather
Just trying to eliminate disturbances won't "recreate homeostasis,"
either. You can't eliminate all disturbances, especially in
airplanes. What you need is an actual homeostatic system or as we
call them here, a control system. And it had better not be just a
homeostatic system; what you need is a RHEOstatic system (as
Mrosovski calls them), better known as a hierachy of control systems
in PCT circles. You wouldn't want an autopilot that could only keep
you, homeostatically, at one heading, speed, and altitude. First you
need to be able to maintain each important variable in a specific
state, and then you need to be able to vary the state in which each
variable is being controlled, so the lower-order control systems can
be put into use by more general, higher-order control systems. The
means of varying the homeostatic state is what we call a reference
signal, and that is how higher-order systems can change what lower
systems are doing without coming into conflict with them. The higher
systems tell the lower ones what state to maintain, and leave the
actual maintaining up to them. It's like the pilot entering the
desired heading, airspeed, and altitude into the autopilot. He
doesn't tell the autopilot how to manipulate the ailerons, elevators,
and throttle -- he just tells it what result to achieve. If he tried
to operate the controls he would be fighting the autopilot. There's a
reason for making the autopilot cut out if the pilot starts using the
controls himself. In living systems, that sort of micromanagement
Every recursion of the system (VSM language) must have req var and
this can be achieved by variety amplification and attenuation. I
guess you are probably more familiar with these concepts than I am.
No, actually I don't use those terms at all. What I do in designing
control systems or models of them could probably be classified in
such abstract terms, but it's not the abstract terms that do the
heavy lifting. I design control systems and make them work, and never
once even think about variety. So far that hasn't proved to be a
I am an airline pilot and the idea of a a crew matching the
environmental variety to maintain essential variables (e.g. speed,
altitude, direction) within limits feels as a useful model.
Perhaps that works for you, but I also recommend seeing how PCT looks
to you as a model of those processes. You're making me wonder if we
don't need yet another book, something like "How to use PCT in the real
At 10:40 PM 4/18/2009 +0200, Arthur Dykstra wrote: | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00701.warc.gz | CC-MAIN-2023-06 | 6,679 | 101 |
http://amcult20.blogspot.com/2008/06/my-internet-habits-exposed.html | code | As I've been recently reading through some education blogs (Coolcatteacher, Practical Theory, Weblogg-ed) I've discovered Wordle. It's a website that creates word clouds from almost anything. For example, copy and paste the Gettysburg Address and the most frequently used words show up larger and bolder. The really cool thing is that you can create tag clouds from your Del.icio.us tags. Here is mine (screen captured using Jing).
I will be comparing my Del.icio.us tag clouds from time to time to see how my tagging (and web browsing habits) change over time.
In what ways is Wordle relevant for the classroom? Can word clouds help students improve reading comprehension? or gain a greater understanding of a historic document or speech? or help students see relationships between two historic documents or speeches?
What other thoughts or ideas do you have? | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.71/warc/CC-MAIN-20170423031202-00186-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 860 | 4 |
http://www.tomshardware.com/forum/248500-29-help-e6600-upgrade | code | Well, your MB does support the CPU, but yes it's not a great fit. The CPU support sheet implies it only supports the CPU at 800 FSB... not sure but I'm guessing that would be 200 Mhz. If your CPU is limited to a 9x multiplier that could be bad.
Any modern MB will work fine. I like the ASUS P45 or P43 line myself.
P5Q Pro is a good choice, crossfire or not. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00185.warc.gz | CC-MAIN-2018-17 | 358 | 3 |
https://www.questarter.com/q/how-to-set-a-refresh-rate-in-a-subscribed-calendar-13_8865.html | code | using the ical on the iPhone / iPod Touch / iPad how can I set the refresh rate?
or does it pick the refresh rate that I had set up in the Mac iCal App?
If no refresh rate was given in the Mac iCal App, what is the default refresh rate? 30 minutes?
I can't find any option to set refresh rate's of calendars on my iPhone.
I checked my Google Calendar subscription in iCal and it had refresh of 15 minutes as default value. This value might vary depending on the provider of the calendar.
But AFAIK, calendars on iPhone don't refresh unless synced from iTunes.
In "Settings->Mail, Contacts, Calendars->Fetch New Data" you can setup what to do. Depending on the account you can Fetch or Push updates to your iPhone - or choose to do it manually.
When viewing Calendar, click on Calendars button in top left. Down at bottom left corner you'll see the refresh circular arrow. Click it and it will refresh all subscribed calendars.
As of 2019/iOS 12.3.1, these settings are under
Passwords and Accounts →
Fetch New Data →
Fetch. It looks like fetch settings are applied globally to all fetch accounts - you can't set different fetch options for individual calendars or mail accounts. | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669352.5/warc/CC-MAIN-20191117215823-20191118003823-00057.warc.gz | CC-MAIN-2019-47 | 1,182 | 12 |
https://www.scirra.com/forum/pm039s_t61763 | code | If you have found a bug, or have a suggestion/comment then leave it here
...work!!! they actually notify you of a PM until it's read!
- Posts: 3,252
- Reputation: 16,780
now i am waiting for subscription to work again too! (or work at all in this case :D )
- Posts: 211
- Reputation: 3,777
Glad you like it :) It will work on every page on the site you are logged in on.
- Posts: 4,231
- Reputation: 49,505
Return to Website Issues and Feedback
Users browsing this forum: No registered users and 1 guest | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00097-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 503 | 12 |
http://www.inclusiveworld.org/569-2/ | code | Scratch Programming makes programming look easy as well as fun through the creation of storyboards and animation. Scratch was first developed at MIT and is popular across all ages. No knowledge of programming is necessary to learn Scratch. Buddies help support our differently abled learners in the class. In the last year's session, participants had a lot of fun working on a Finding Nemo storyline project - check it out here! In the this year's session, participants had a lot of fun working on a Lion King storyline project - check it out here! | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348526471.98/warc/CC-MAIN-20200607075929-20200607105929-00341.warc.gz | CC-MAIN-2020-24 | 548 | 1 |
https://meta.stackexchange.com/questions/134757/responding-to-new-posts-containing-deprecated-code?noredirect=1 | code | It is not uncommon for new posts to contain deprecated code: whilst there may well be good reasons for it in a legacy project, it's usually discouraged in new projects.
Some users no doubt end up here with such deprecated code having followed outdated tutorials, whilst others may end up following posts on SO that contain deprecated code.
How should one respond to:
new questions containing deprecated code;
new answers that continue with the deprecated code used in the question, even if the answer is otherwise correct; and
new answers that introduce deprecated code not found in the question?
Existing MSO questions touching on this subject include:
Specifically relates to answers that have become deprecated since they were posted; its accepted answer (of posting a new answer) is not applicable in this case.
Slightly more relevant, but again relates to posts that have become deprecated since they were posted. | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00467.warc.gz | CC-MAIN-2020-16 | 918 | 9 |
https://mail.scipy.org/pipermail/scipy-user/2012-October/033522.html | code | [SciPy-User] Orthogonal polynomials on the unit circle
Charles R Harris
Sat Oct 27 09:35:40 CDT 2012
On Fri, Oct 26, 2012 at 7:40 PM, <[email protected]> wrote:
> with link to handbook
> application: goodness of fit for circular data
> Are those available anywhere in python land?
Well, we have the trivial case: ϕ_n(z)=z^n for the uniform measure. That
reduces to the usual exp(2*pi*i*\theta) in angular coordinates when the
weight is normalized. But I think you want more ;-) I don't know of any
collection of such functions for python.
What's the difference between orthogonal polynomials on the unit
> circle and periodic polynomials like Fourier series?
It looks to be the weight. Also, the usual Fourier series include terms in
1/z which allows for real functions. I suspect there is some finagling that
can be done to make things go back and forth, but I am unfamiliar with the
topic. Hmm, Laurent polynomials on the unit circle might be more what you
are looking for, see the reference at http://dlmf.nist.gov/18.33 .
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the SciPy-User | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806353.62/warc/CC-MAIN-20171121113222-20171121133222-00478.warc.gz | CC-MAIN-2017-47 | 1,139 | 21 |
https://www.techradar.com/news/computing-components/processors/how-windows-protects-your-pc-1039892/2 | code | In the sand
One solution is to use a 'sandbox' to quarantine running code so that any attempt to make unauthorised changes to the system can be prevented caught before they are carried out. Some antivirus software (even free versions) now provides the facility to automatically run suspect or unsigned programs in a sandbox.
Perhaps more importantly, web browsers are also beginning to use sandboxes, which is another good reason to abandon that old version of Internet Explorer. The ability of malicious or hijacked sites to silently install code on your computer simply by surfing to them will be greatly reduced.
A sandbox operates a little like a virtual machine, in that it provides the running code with a virtual environment containing everything it needs to believe it's running on real hardware. However, it actually runs in a carefully crafted simulation with severe limits placed on it. Any changes to the actual operating system are never allowed to propagate beyond the sandbox.
The sandbox used in Google's Chrome browser is a good example of the concept in action. Rather than write a complete virtualisation product, the developers used Windows' own security model to help Chrome achieve its renowned speed.
Chrome's sandbox works because malware needs to write to unauthorised parts of RAM or to the hard disk to install itself so that it can run again after a reboot. In Windows, this can only be done using a system call to the kernel's I/O functions, all of which check the privileges of the process calling them. Chrome's sandbox is set up so that write operations never have the correct privileges and therefore fail. Return codes are faked, so the malware believes it's installing itself, but never does.
For developers, Chrome's sandbox is particularly useful because it isn't deeply embedded in the browser. Developers can use it to test their own programs and make sure that they don't try to do anything they shouldn't, or which could be construed as malicious.
Chrome's sandbox is among the most secure. In the first three years since the browser's release, it resisted all attempts at subversion during the prestigious Pwn2Own hacking competition. Held during the annual CanSecWest security conference, the competition has seen IE8 and Firefox hacked wide open. However, Chrome's sandbox may have been breached, if the claims of one French security company are true.
Researchers at VUPEN Security recently issued a security advisory giving details of what it claims is a simple, two-step process for breaking out of Chrome's sandbox and making unauthorised changes to the operating system. The news of Chrome's 'pwning' via its sandbox has been met with concern in the online security community, not least because VUPEN Security has chosen not to share its findings with Google, which would be more usual.
When a security researcher finds an exploitable bug, he or she usually contacts the developer with the details and perhaps a suggested fix. Only when the developer has implemented a fix and issued new code does the researcher exercise their bragging rights by publishing full details of the bug online.
However, in a statement, VUPEN Security says that, "We did not alert Google as we only share our vulnerability research with our Government customers for defensive and offensive security". This stance hints at the commercialisation of so-called 'zero day' exploits – those not reported to the developer so that they can be fixed but instead kept for private exploitation or sale.
At a time when governments are talking openly about their preparations for cyber warfare, exploitable bugs, packaged and ready to use with exploit code, can command serious money. However, several pundits have questioned VUPEN Security's announcement. If, they ask, VUPEN is planning to sell its Chrome exploit to a government customer to use as a weapon, why publicise it and put potential adversaries on their guard?
Take the blue pill
We take the idea of virtualisation for granted as a cheap or free path to creating networks on a single physical computer, but it is far from being a software-only technique. Since the mid 2000s, Intel and AMD have included hardware virtualisation inside their chips.
Both companies aimed to make the creation of virtual machine software easier, but their technologies had an unexpected side effect. When you create and run a virtual computer in a package like Oracle's free VirtualBox, for example, the entire simulation runs under the control of a process known as a hypervisor.
The hypervisor (which is also referred to as a virtual machine manager) makes sure that the entire physical computer is apparently available to the virtual machine. It handles access to everything from the BIOS to the USB ports, and resolves any resource access conflicts with other virtual machines it may be controlling at the same time.
However, not long after AMD and Intel released chips supporting virtualisation, Polish security guru Joanna Rutkowska created an ingenious hacking technique that ensures that people can see anything the chips are doing, including everything you type in.
For this, Rutkowska created a simple hypervisor that tells the processor to run under its control. However, the chip itself and the operating system that's running on it have no way of knowing that it has been flipped into this malicious hypervisor. They simply continue running as if nothing had happened.
Rutkowska called her approach the Blue Pill after the concept of the same name in cult sci-fi movie The Matrix. Running the Blue Pill exploit puts the chip into a simulation of the computer, which is indistinguishable from the real thing. Once inside this simulation, everything the running operating system does is laid bare.
Because the simulation is indistinguishable from the real thing, malware using the same concept (like a rootkit, for example) can be created that, potentially, cannot be detected. Other researchers have pointed out flaws in Rutkowska's approach, but there's no doubting that the sheer convenience of virtualisation may ultimately prove the downfall of current processor protection measures, and lead to ever more ingenious defence mechanisms.
First published in PC Plus Issue 313. Read PC Plus on PC, Mac and iPad
Liked this? Then check out Best antivirus 2011: 10 programs on test
Sign up for TechRadar's free Week in Tech newsletter
Get the best tech stories of the week, plus the most popular news and reviews delivered straight to your inbox. Sign up at http://www.techradar.com/register
Follow us on Twitter * Find us on Facebook * Add us on Google+ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00132.warc.gz | CC-MAIN-2023-14 | 6,621 | 25 |
https://community.splunk.com/t5/Getting-Data-In/How-to-configure-indexing-of-historical-data-from-a-database-to/td-p/141566 | code | What I'm trying to achieve is,
Issues I'm facing
The impact of this behavior is that, I cannot do historical pull as searches will not work with time picker. Search will not display the results because search will not find the data for historical duration say last two weeks, as all the historical data is indexes with pull time which is now.
How do I overcome this issue?
I'm assuming you're using the DB Connect App, right? If that's the case, have a look on a similar question:
It's tailored for MS SQL Server but the idea of configuring the timestamp parsing format is the same for any DB. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00493.warc.gz | CC-MAIN-2020-40 | 593 | 6 |
https://zadane.pl/zadanie/313974 | code | Write about your travel experiences.
I've/ I've never cycled more than 20 kilometers.
1 I have cycled more than 20 kilometers
2 I have visited another country
3 I have studied another language
4 I have met a person from another country
5 I have never swum in two different seas
6 I have never rode on a horse.
7 I have never eaten Indian food | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00336-ip-10-171-10-70.ec2.internal.warc.gz | CC-MAIN-2017-04 | 342 | 9 |
https://chat.pantsbuild.org/t/9745627/i-m-wondering-if-someone-could-weigh-in-on-best-practices-fo | code | 03/16/2017, 8:51 PM
I'm wondering if someone could weigh in on best practices for organizing java projects. Do you have separate top-level directories for each project and then
underneath those? Or does it tend to work better to have just one
directory and branch out based on package beneath that? | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819971.86/warc/CC-MAIN-20240424205851-20240424235851-00238.warc.gz | CC-MAIN-2024-18 | 298 | 4 |
http://www.bobssigns.com/careers/ | code | SIGN SHOP CO-MANAGER
We are looking for an individual with a strong graphics background to work with their hands in a digital print/vehicle graphics company. You MUST have a desire to be a leader, be well organized and very detail oriented. This is a physically demanding job. GREAT cutting and math skills required.
We are looking for a talented designer who is extremely efficient with the adobe suite, has a great eye for detail, is task oriented, and creat with customers.
Are you detailed oriented and open for a challenge? Are you highly motivated and creative? If so, we’re looking for talented and inspired candidates who are focused on teamwork, task completion, time management, and a passion to learn. If you are are all those things then come join our team. | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532929.54/warc/CC-MAIN-20190421215917-20190422001917-00166.warc.gz | CC-MAIN-2019-18 | 771 | 4 |
https://jalammar.github.io/?ref=txt.cohere.ai | code | Here are eight observations I’ve shared recently on the Cohere blog and videos that go over them.:
Article: AI is Eating The World
Here are eight observations I’ve shared recently on the Cohere blog and videos that go over them.:
Article: AI is Eating The World
Can AI Image generation tools make re-imagined, higher-resolution versions of old video game graphics?
Over the last few days, I used AI image generation to reproduce one of my childhood nightmares. I wrestled with Stable Diffusion, Dall-E and Midjourney to see how these commercial AI generation tools can help retell an old visual story - the intro cinematic to an old video game (Nemesis 2 on the MSX). This post describes the process and my experience in using these models/services to retell a story in higher fidelity graphics.
This fine-looking gentleman is the villain in a video game. Dr. Venom appears in the intro cinematic of Nemesis 2, a 1987 video game. This image, in particular, comes at a dramatic reveal in the cinematic.
Let’s update these graphics with visual generative AI tools and see how they compare and where each succeeds and fails.
Here’s a side-by-side look at the panels from the original cinematic (left column) and the final ones generated by the AI tools (right column):
This figure does not show the final Dr. Venom graphic because I want you to witness it as I had, in the proper context and alongside the appropriate music. You can watch that here:
(V2 Nov 2022: Updated images for more precise description of forward diffusion. A few more images in this version)
AI image generation is the most recent AI capability blowing people’s minds (mine included). The ability to create striking visuals from text descriptions has a magical quality to it and points clearly to a shift in how humans create art. The release of Stable Diffusion is a clear milestone in this development because it made a high-performance model available to the masses (performance in terms of image quality, as well as speed and relatively low resource/memory requirements).
After experimenting with AI image generation, you may start to wonder how it works.
This is a gentle introduction to how Stable Diffusion works.
Stable Diffusion is versatile in that it can be used in a number of different ways. Let’s focus at first on image generation from text only (text2img). The image above shows an example text input and the resulting generated image (The actual complete prompt is here). Aside from text to image, another main way of using it is by making it alter images (so inputs are text + image).
A little less than a year ago, I joined the awesome Cohere team. The company trains massive language models (both GPT-like and BERT-like) and offers them as an API (which also supports finetuning). Its founders include Google Brain alums including co-authors of the original Transformers paper. It’s a fascinating role where I get to help companies and developers put these massive models to work solving real-world problems.
I love that I get to share some of the intuitions developers need to start problem-solving with these models. Even though I’ve been working very closely on pretrained Transformers for the past several years (for this blog and in developing Ecco), I’m enjoying the convenience of problem-solving with managed language models as it frees up the restrictions of model loading/deployment and memory/GPU management.
These are some of the articles I wrote and collaborated on with colleagues over the last few months:
This is a high-level intro to large language models to people who are new to them. It establishes the difference between generative (GPT-like) and representation (BERT-like) models and examples use cases for them.
This is one of the first articles I got to write. It's extracted from a much larger document that I wrote to explore some of the visual language to use in explaining the application of these models.
Massive GPT models open the door for a new way of programming. If you structure the input text in the right way, you can useful (and often fascinating) results for a lot of taasks (e.g. text classification, copy writing, summarization...etc).
This article visually demonstrates four principals to create prompts effectively.
This is a walkthrough of creating a simple summarization system. It links to a jupyter notebook which includes the code to start experimenting with text generation and summarization.
The end of this notebook shows an important idea I want to spend more time on in the future. That of how to rank/filter/select the best from amongst multiple generations.
Semantic search has to be one of the most exciting applications of sentence embedding models. This tutorials implements a "similar questions" functionality using sentence embeddings and a a vector search library.
Finetuning tends to lead to the best results language models can achieve. This article explains the intuitions around finetuning representation/sentence embedding models. I've added a couple more visuals to the Twitter thread.
The research around this area is very interesting. I've highly enjoyed papers like Sentence BERT and Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
This one is a little bit more technical. It explains the parameters you tweak to adjust a GPT's decoding strategy -- the method with which the system picks output tokens.
This is a walkthrough of one of the most common use cases of embedding models -- text classification. It is similar to A Visual Guide to Using BERT for the First Time, but uses Cohere's API.
Summary: The latest batch of language models can be much smaller yet achieve GPT-3 like performance by being able to query a database or search the web for information. A key indication is that building larger and larger models is not the only way to improve performance.
The last few years saw the rise of Large Language Models (LLMs) – machine learning models that rapidly improve how machines process and generate language. Some of the highlights since 2017 include:
For a while, it seemed like scaling larger and larger models is the main way to improve performance. Recent developments in the field, like DeepMind’s RETRO Transformer and OpenAI’s WebGPT, reverse this trend by showing that smaller generative language models can perform on par with massive models if we augment them with a way to search/query for information.
This article breaks down DeepMind’s RETRO (Retrieval-Enhanced TRansfOrmer) and how it works. The model performs on par with GPT-3 despite being 4% its size (7.5 billion parameters vs. 185 billion for GPT-3 Da Vinci).
RETRO was presented in the paper Improving Language Models by Retrieving from Trillions of Tokens. It continues and builds on a wide variety of retrieval work in the research community. This article explains the model and not what is especially novel about it.
Introducing the Explainable AI Cheat Sheet, your high-level guide to the set of tools and methods that helps humans understand AI/ML models and their predictions.
I introduce the cheat sheet in this brief video:
By visualizing the hidden state between a model's layers, we can get some clues as to the model's "thought process".
Part 2: Continuing the pursuit of making Transformer language models more transparent, this article showcases a collection of visualizations to uncover mechanics of language generation inside a pre-trained language model. These visualizations are all created using Ecco, the open-source package we're releasing
In the first part of this series, Interfaces for Explaining Transformer Language Models, we showcased interactive interfaces for input saliency and neuron activations. In this article, we will focus on the hidden state as it evolves from model layer to the next. By looking at the hidden states produced by every transformer decoder block, we aim to gleam information about how a language model arrived at a specific output token. This method is explored by Voita et al.. Nostalgebraist presents compelling visual treatments showcasing the evolution of token rankings, logit scores, and softmax probabilities for the evolving hidden state through the various layers of the model.
Interfaces for exploring transformer language models by looking at input saliency and neuron activation.
The Transformer architecture has been powering a number of the recent advances in NLP. A breakdown of this architecture is provided here . Pre-trained language models based on the architecture, in both its auto-regressive (models that use their own output as input to next time-steps and that process tokens from left-to-right, like GPT2) and denoising (models trained by corrupting/masking the input and that process tokens bidirectionally, like BERT) variants continue to push the envelope in various tasks in NLP and, more recently, in computer vision. Our understanding of why these models work so well, however, still lags behind these developments.
This exposition series continues the pursuit to interpret and visualize the inner-workings of transformer-based language models. We illustrate how some key interpretability methods apply to transformer-based language models. This article focuses on auto-regressive models, but these methods are applicable to other architectures and tasks as well.
This is the first article in the series. In it, we present explorables and visualizations aiding the intuition of:
The next article addresses Hidden State Evolution across the layers of the model and what it may tell us about each layer's role.
The tech world is abuzz with GPT3 hype. Massive language models (like GPT3) are starting to surprise us with their abilities. While not yet completely reliable for most businesses to put in front of their customers, these models are showing sparks of cleverness that are sure to accelerate the march of automation and the possibilities of intelligent computer systems. Let’s remove the aura of mystery around GPT3 and learn how it’s trained and how it works.
A trained language model generates text.
We can optionally pass it some text as input, which influences its output.
The output is generated from what the model “learned” during its training period where it scanned vast amounts of text.
Check out the first video in my new series introducing the general public to AI and machine learning.
My aim for this series is to help people integrate ML into their world-view away from all the hype and overpromises that plauge the topic.
I had an incredible time organizing and speaking at the AI/machine learning track at QCon London 2020 where I invited and shared the stage with incredible speakers Vincent Warmerdam, Susanne Groothuis, Peter Elger, and Hien Luu.
QCon is a global software conference for software engineers, architects, and team leaders, with over 1,600 attendees in London. All speakers have a software background.
Progress has been rapidly accelerating in machine learning models that process language over the last couple of years. This progress has left the research lab and started powering some of the leading digital products. A great example of this is the recent announcement of how the BERT model is now a major force behind Google Search. Google believes this step (or progress in natural language understanding as applied in search) represents “the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search”.
This post is a simple tutorial for how to use a variant of BERT to classify sentences. This is an example that is basic enough as a first intro, yet advanced enough to showcase some of the key concepts involved.
I had a great time speaking at the MIT Analytics Lab about some of my favorite ideas in natural language processing and their practical applications.
This year, we saw a dazzling application of machine learning. The OpenAI GPT-2 exhibited impressive ability of writing coherent and passionate essays that exceed what we anticipated current language models are able to produce. The GPT-2 wasn’t a particularly novel architecture – it’s architecture is very similar to the decoder-only transformer. The GPT2 was, however, a very large, transformer-based language model trained on a massive dataset. In this post, we’ll look at the architecture that enabled the model to produce its results. We will go into the depths of its self-attention layer. And then we’ll look at applications for the decoder-only transformer beyond language modeling.
My goal here is to also supplement my earlier post, The Illustrated Transformer, with more visuals explaining the inner-workings of transformers, and how they’ve evolved since the original paper. My hope is that this visual language will hopefully make it easier to explain later Transformer-based models as their inner-workings continue to evolve.
The NumPy package is the workhorse of data analysis, machine learning, and scientific computing in the python ecosystem. It vastly simplifies manipulating and crunching vectors and matrices. Some of python’s leading package rely on NumPy as a fundamental piece of their infrastructure (examples include scikit-learn, SciPy, pandas, and tensorflow). Beyond the ability to slice and dice numeric data, mastering numpy will give you an edge when dealing and debugging with advanced usecases in these libraries.
In this post, we’ll look at some of the main ways to use NumPy and how it can represent different types of data (tables, images, text…etc) before we can serve them to machine learning models.
I gave a talk at Qcon London this year. Watch it here:
In this video, I introduced word embeddings and the word2vec algorithm. I then proceeded to discuss how the word2vec algorithm is used to create recommendation engines in companies like Airbnb and Alibaba. I close by glancing at real-world consequences of popular recommendation systems like those of YouTube and Facebook.
My Illustrated Word2vec post used and built on the materials I created for this talk (but didn’t include anything on the recommender application of word2vec). This was my first talk at a technical conference and I spent quite a bit of time preparing for it. In the six weeks prior to the conference I spent about 100 hours working on the presentation and ended up with 200 slides. It was an interesting balancing act of trying to make it introductory but not shallow, suitable for senior engineers and architects yet not necessarily ones who have machine learning experience.
“There is in all things a pattern that is part of our universe. It has symmetry, elegance, and grace - those qualities you find always in that which the true artist captures. You can find it in the turning of the seasons, in the way sand trails along a ridge, in the branch clusters of the creosote bush or the pattern of its leaves.
We try to copy these patterns in our lives and our society, seeking the rhythms, the dances, the forms that comfort. Yet, it is possible to see peril in the finding of ultimate perfection. It is clear that the ultimate pattern contains it own fixity. In such perfection, all things move toward death.” ~ Dune (1965)
I find the concept of embeddings to be one of the most fascinating ideas in machine learning. If you’ve ever used Siri, Google Assistant, Alexa, Google Translate, or even smartphone keyboard with next-word prediction, then chances are you’ve benefitted from this idea that has become central to Natural Language Processing models. There has been quite a development over the last couple of decades in using embeddings for neural models (Recent developments include contextualized word embeddings leading to cutting-edge models like BERT and GPT2).
Word2vec is a method to efficiently create word embeddings and has been around since 2013. But in addition to its utility as a word-embedding method, some of its concepts have been shown to be effective in creating recommendation engines and making sense of sequential data even in commercial, non-language tasks. Companies like Airbnb, Alibaba, Spotify, and Anghami have all benefitted from carving out this brilliant piece of machinery from the world of NLP and using it in production to empower a new breed of recommendation engines.
In this post, we’ll go over the concept of embedding, and the mechanics of generating embeddings with word2vec. But let’s start with an example to get familiar with using vectors to represent things. Did you know that a list of five numbers (a vector) can represent so much about your personality?
Hacker News (98 points, 19 comments), Reddit r/MachineLearning (164 points, 20 comments)
Translations: Chinese (Simplified), French 1, French 2, Japanese, Korean, Persian, Russian, Spanish
2021 Update: I created this brief and highly accessible video intro to BERT
The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community has been putting forward incredibly powerful components that you can freely download and use in your own models and pipelines (It’s been referred to as NLP’s ImageNet moment, referencing how years ago similar developments accelerated the development of machine learning in Computer Vision tasks).
If you’re planning to learn data analysis, machine learning, or data science tools in python, you’re most likely going to be using the wonderful pandas library. Pandas is an open source library for data manipulation and analysis in python.
One of the easiest ways to think about that, is that you can load tables (and excel files) and then slice and dice them in multiple ways:
Hacker News (65 points, 4 comments), Reddit r/MachineLearning (29 points, 3 comments)
Translations: Arabic, Chinese (Simplified) 1, Chinese (Simplified) 2, French 1, French 2, Italian, Japanese, Korean, Persian, Russian, Spanish 1, Spanish 2, Vietnamese
Watch: MIT’s Deep Learning State of the Art lecture referencing this post
Featured in courses at Stanford, Harvard, MIT, Princeton, CMU and others
In the previous post, we looked at Attention – a ubiquitous method in modern deep learning models. Attention is a concept that helped improve the performance of neural machine translation applications. In this post, we will look at The Transformer – a model that uses attention to boost the speed with which these models can be trained. The Transformer outperforms the Google Neural Machine Translation model in specific tasks. The biggest benefit, however, comes from how The Transformer lends itself to parallelization. It is in fact Google Cloud’s recommendation to use The Transformer as a reference model to use their Cloud TPU offering. So let’s try to break the model apart and look at how it functions.
The Transformer was proposed in the paper Attention is All You Need. A TensorFlow implementation of it is available as a part of the Tensor2Tensor package. Harvard’s NLP group created a guide annotating the paper with PyTorch implementation. In this post, we will attempt to oversimplify things a bit and introduce the concepts one by one to hopefully make it easier to understand to people without in-depth knowledge of the subject matter.
2020 Update: I’ve created a “Narrated Transformer” video which is a gentler approach to the topic:
Let’s begin by looking at the model as a single black box. In a machine translation application, it would take a sentence in one language, and output its translation in another.
May 25th update: New graphics (RNN animation, word embedding graph), color coding, elaborated on the final attention example.
Note: The animations below are videos. Touch or hover on them (if you’re using a mouse) to get play controls so you can pause if needed.
Sequence-to-sequence models are deep learning models that have achieved a lot of success in tasks like machine translation, text summarization, and image captioning. Google Translate started using such a model in production in late 2016. These models are explained in the two pioneering papers (Sutskever et al., 2014, Cho et al., 2014).
I found, however, that understanding the model well enough to implement it requires unraveling a series of concepts that build on top of each other. I thought that a bunch of these ideas would be more accessible if expressed visually. That’s what I aim to do in this post. You’ll need some previous understanding of deep learning to get through this post. I hope it can be a useful companion to reading the papers mentioned above (and the attention papers linked later in the post).
A sequence-to-sequence model is a model that takes a sequence of items (words, letters, features of an images…etc) and outputs another sequence of items. A trained model would work like this:
Things get a lot more interesting once you’re comfortable with the fundamentals and start with Reshaping and Pivot Tables. That guide shows some of the more interesting functions of reshaping data. Below are some visualizations to go along with the Pandas reshaping guide.
In the previous post, we looked at the basic concepts of neural networks. Let us now take another example as an excuse to guide us to explore some of the basic mathematical ideas involved in prediction with neural networks.
Update: Part 2 is now live: A Visual And Interactive Look at Basic Neural Network Math
I’m not a machine learning expert. I’m a software engineer by training and I’ve had little interaction with AI. I had always wanted to delve deeper into machine learning, but never really found my “in”. That’s why when Google open sourced TensorFlow in November 2015, I got super excited and knew it was time to jump in and start the learning journey. Not to sound dramatic, but to me, it actually felt kind of like Prometheus handing down fire to mankind from the Mount Olympus of machine learning. In the back of my head was the idea that the entire field of Big Data and technologies like Hadoop were vastly accelerated when Google researchers released their Map Reduce paper. This time it’s not a paper – it’s the actual software they use internally after years and years of evolution.
So I started learning what I can about the basics of the topic, and saw the need for gentler resources for people with no experience in the field. This is my attempt at that.
Discussion: Reddit r/Android (80 points, 16 comments)
This last reason is the operating reason for this post since we’ll be focusing on Android. If you examine the tensorflow repo on GitHub, you’ll find a little tensorflow/examples/android directory. I’ll try to shed some light on the Android TensorFlow example and some of the things going on under the hood. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476464.74/warc/CC-MAIN-20240304165127-20240304195127-00086.warc.gz | CC-MAIN-2024-10 | 22,978 | 93 |
https://community.adobe.com/t5/premiere-pro-discussions/importing-sequence-just-imports-subclips/td-p/13233623 | code | So I've always used to edit into sequences and then nest those sequences into a final Editing sequence. But today when I try importing a sequence in my timeline, it imports all the subclips of that sequence
And that "global FX mute" thing in the Project panel ... most users never use it, and even for those of use that do ... it gets accidentally toggled, and suddenly none of our effects work ... what the HAY??????
Ah ... oooppsies ... nothing to see here ... sigh. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00343.warc.gz | CC-MAIN-2022-49 | 468 | 3 |
http://www.linuxforums.org/forum/pclinuxos/156123-solved-problem-blender-2-5-2-a.html | code | Results 1 to 1 of 1
Enjoy an ad free experience by logging in. Not a member yet? Register.
[SOLVED] Problem with Blender 2.5.2
I use the NVIDIA GeForce 2 Pro, or something like that. I sought to it that the module worked and stuff. But I get an error from blender giving me the idea that it might be OpenGL. Here is the error:
Compiled with Python version 2.5.2.
Checking for installed Python... got it!
Xlib: extension "GLX" missing on display ":0.0".
intern/ghost/intern/GHOST_WindowX11.cpp:177: X11 glxChooseVisual() failed for OpenGL, verify working openGL system!
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 18 (X_ChangeProperty)
Resource id in failed request: 0xb68f0570
Serial number of failed request: 21
Current serial number in output stream: 22 | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00005-ip-10-171-10-108.ec2.internal.warc.gz | CC-MAIN-2017-09 | 807 | 13 |
http://www.malvern.com/labeng/products/morphologi/scattergram.htm | code | Morphologi - Scattergram
Are you looking for:
- Easy access to the most valuable information contained in a measurement?
- Clear visualisation of measurement data?
Easy classification saving time in SOP development?
1) Visulization od measurement data - plot scattergrams using any size/shape parameter.
2) Filter on any parameter - group and classify.
3) Apply classifications and filters in order to group or exclude certain values based on any size or shape parameter.
Image analysis software demo View a 5 step demo of the image analysis system measurement procedure
Compare and cluster data The image analysis software allows you to compare and cluster data to find differences or similarities between multiple measurements.
Report designer The image analysis software comes with a full range of quantitative reports and allows operators to customize each report for different parameters.
Data export The image analysis software allows the operator to 'drag and drop' report data into other applications and configure specific data items for export.
Regulatory compliance The image analysis software is consistent with the ISO 13322-1 directive for statistical particle size measurements and can be enhanced to for users needing to achieve complete 21CFR Part 11 compliance. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708144156/warc/CC-MAIN-20130516124224-00067-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,279 | 13 |
https://archive.sap.com/discussions/thread/173056 | code | RSA1 Transaction: Work in client 001 only
I have done NW2004s installation for BW 7.0. Proceeding further, I have done a client copy from 000 to 100 which was successful.
Now in CLient 100, which iam supposed to use for BW, I am unable to run the TCODE RSA1.
Can anyone please let me know the way forward ASAP as it is very urgent.
Thanks and Regards,
You need to make client 100 as your default login client. For this change the value of profile paramter login/system_client accordingly in RZ10. Further execute the function module RS_MANDT_UNIQUE_SET in SE37 with value of i_mandt being set to 100 (in general the client where you want to execute this transaction). This is in accordance with OSS note 316923.
Please let us know if this solved the issue for you and <deleted>.
DO NOT ASK FOR POINTS
Edited by: Matt Kangas on Feb 5, 2009 11:11 AM | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202476.48/warc/CC-MAIN-20190321010720-20190321032720-00351.warc.gz | CC-MAIN-2019-13 | 847 | 9 |
http://www.utdmavs.org/emas2016/Keynote_Speakers.html | code | Professor Frank P. M. Dignum from Utrecht University will give the keynote speech for the joint EMAS-COIN session on May 9, 2016 at 14:00.
Implementing Norms -- Why is it so difficult?
Many people have made implementations of norms or normative systems over the years. However, the implementations differ widely and no uniform methodology to implement normative systems seemed to have been developed. Why is it so difficult to implement norms? Can't we just have a Norms module that can be added to a system? I will discuss these issues and also point to some possible ways forward.
Professor Jaime Simão Sichman from Universidade de São Paul is the keynote speaker for the EMAS workshop. His talk is scheduled May 10 at 14:00.
Designing and Programming Multiagent Organizations
In the last years, social and organizational aspects of agency have become a major issue in MAS research. Recent applications of MAS on Web Services, Grid Computing and Ubiquitous Computing enforce the need of using these aspects in order to ensure some social order within these systems. One of the ways to assure such a social order is through the so-called multiagent organizations. Multiagent organizations are of two types: either the organization emerge from the activity of the individual agents or it is designed to facilitate and guide some specific global behavior. In the latter case, systems are characterized by the autonomy of the individual participants that however must be able to collaboratively achieve predetermined global goals, within a globally constrained environment. However, there is still a lack of a comprehensive view of the diverse concepts, models and approaches related to multiagent organizations. Moreover, most designers have doubts about how to put these concepts in practice, i.e., how to design and how to program them. This invited talk aims to give some possible answers to such questions.
|Site last updated April 18, 2016 ( Behnam Torabi )| | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742779.14/warc/CC-MAIN-20181115141220-20181115163220-00368.warc.gz | CC-MAIN-2018-47 | 1,964 | 7 |
https://theexceptioncatcher.com/blog/tag/dns/ | code | Since this is the time of the year that many making resolutions relating to self improvement, here are a few of mine:
- Finally get around to learning GridGain. [I would love to see a well published book, or at least a Kindle eBook to get some headway on this] HazelCast would also be interesting, but GridGain has more of what I’m looking for.
- Finish the massive tomb that is “Groovy In Action”
- Finally understand how network routing works.
- Get more experience with DNS, and DNS tools
- Master NMap [not just learn the basic uses of it, but to really excel with the tool] This would be similar with the reading up on SSH I did last year.
- Get up to conversational level German. [Living outside of German speaking nations makes this incredibly difficult]
- Finally develop some strong time management habits.
- Learn how to use Python [to the point where you can do some cool stuff with it]
- Learn R [rather than haphazardly hack]
- Meet/talk with some of the gurus of airfare scheduling/decoding, and the famous Tom Stuker.
- Learn how to use GraphViz [This is one of the odd ones here, but it’s interesting]
- Get better with Erlang and to find/make real world uses.
- Learn/Create a GUI in Apache Pivot, and a web interface with either Stripes and/or Wicket.
How am I planning to accomplish these things? Having goals, and putting them on my task list.
Currently, I am reading up on Maven, and Groovy. I read a book on GIT, and got some practice w. A review of the strengths and weaknesses may be coming up in a later post on this blog.
To the 1.5 readers left reading this blog: What are your goals for the New Year? Leave the response as a blog post linking back to this post or in the comments below. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510697.51/warc/CC-MAIN-20230930145921-20230930175921-00738.warc.gz | CC-MAIN-2023-40 | 1,722 | 17 |
https://www.geoengineer.org/news/bentley-ags-checkers-diggs-and-the-future-world-of-data-transfer | code | The Bentley Geotechnical Information Management team is taking a leading role in developing OpenSource toolkits for AGS and DIGGS and are retiring the AGS checkers to help support these initiatives.
Bentley has been a big supporter of transferring geotechnical data in a standardized format since the start of our involvement in the UK AGS format committee in 1997 and the US DIGGS committee in 2005. Their hard work and perseverance have recently been recognized, with one team member getting an AGS award in 2020 and a total of 3 AGS Awards this year.
Bentley's work within the committees is not just about “Commas and Quotes” or “Pointy Brackets”. The IT industry has changed a lot since 1997 and Bentley through it's work within the committees is leading the industry forward to take advantage of recent changes.
The rapid rise of OpenSource solutions, using tools like Python, has been accelerating over the last five years.
Bentley is no stranger to the OpenSource community as it launched iModel.js in October 2018. iModel.js is an open-source initiative to improve the accessibility, for both visualization and analytical visibility, of infrastructure digital twins.
Bentley spotted an opportunity in late 2020 to be involved with a project being developed on GitHub by Asitha Senanayake from Fugro. Asitha was working on an open-source library to validate AGS 4.1 files and Bentley linked up with him and Tony Daly from Amageo to create a development library that has been reviewed, tested and ported over to an AGS GitLab open-source repository.
The AGS validator project is now in Beta and provides the industry with a singe source of truth for the validity of AGS 4.0.3, 4.0.4 and 4.1 files. The library provides core validation and can be used within open-source or commercial software, via a command line or a desktop App. The libraries and Desktop App can be downloaded and used free of charge.
To support the adoption of a single AGS validator, Bentley will be withdrawing their gINT and AGS Checkers at the end of the Beta period.
Bentley is also taking a leading role in a similar project for DIGGS.
PyDIGGS is a DIGGS toolkit project being led by Xin Ping from Aardman Geotechnics, part of the Tetra Tech group. The project has started to deliver a set of open-source tools for validation and conversion and bentley is currently working on the testing elements of this project and looking to migrate the work they did with their DIGGS Feedback Tool into this library.
This toolkit will provide a single source of truth for the validity of DIGGS files and help software companies produce and consume DIGGS data.
Bentley is fulfilling the long term goal of having geotechnical data work like HTML becoming a reality.
When you open a browser and view a website, you don’t get an HTML file emailed to you, it is just there on the screen. The magic is happening in the background not via your inbox.
Standardized and validated data is the necessary start of this process as it allows the magic to start.
The recent launch of our OpenGround Development Network documentation has created a lot of interest and some very cool Apps that can stream data from OpenGround into multiple applications. Bentley continues to grow this network of developers with their APIs and Software Development Toolkits (SDKs) and plans to plug these two great OpenSource initiatives into the OpenGround ecosystem.
Looking for more information? Fill in the form and we will contact Bentley for you. Alternatively, you can visit Bentley's website and speak with a Bentley Geotechnical Expert.
In mainstream applications, slope stability analy...
The East African Community (EAC) is helping to in...
The advance payments of over 870 million Canadian...
Shift Support in OpenGround can be an invaluabl...
by Daniel Wai RS3 is well known for its capabilit... | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00431.warc.gz | CC-MAIN-2022-33 | 3,854 | 21 |
https://explore.postman.com/team/box | code | Box provides APIs and SDKs to securely upload, store, view, annotate, search, and comment on nearly any type of file in your apps. Designed for the enterprise.
Box Platform API
Collection for the Box.com APIs
Run in Postman
No templates listed | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00214.warc.gz | CC-MAIN-2020-05 | 243 | 5 |
https://kepiqadymyhado.alphabetnyc.com/improving-the-efficiency-of-semantic-based-search-43936hc.html | code | With the rich toolset offered by incremental learning, all reading, learning, viewing, archiving, and annotation functions can be delegated to SuperMemo. This goes far beyond standard learning and includes personal notes, home videos, lectures available in audio and video formats, YouTube material, family photo-albums, diaries, audio files, scanned paper materials, etc. The oldest, most popular, and the most mature component of incremental learning is incremental reading.
Thresholding[ edit ] The simplest method of image segmentation is called the thresholding method.
This method is based on a clip-level or a threshold value to turn a gray-scale image into a binary image. There is also a balanced histogram thresholding. The key of this method is to select the threshold value or values when multiple-levels are selected.
Recently, methods have been developed for thresholding computed tomography CT images. Data clustering Source image. Note that a common technique to improve performance for large images is to downsample the image, compute the clusters, and then reassign the values to the larger image if necessary.
The K-means algorithm is an iterative technique that is used to partition an image into K clusters. The difference is typically based on pixel colorintensitytextureand location, or a weighted combination of these factors. K can be selected manually, randomlyor by a heuristic.
This algorithm is guaranteed to converge, but it may not return the optimal solution.
The quality of the solution depends on the initial set of clusters and the value of K. The idea is simple: Assuming the object of interest is moving, the difference will be exactly that object.
Improving on this idea, Kenney et al. They use a robot to poke objects in order to generate the motion signal necessary for motion-based segmentation.
Interactive segmentation follows the interactive perception framework proposed by Dov Katz and Oliver Brock . Compression-based methods[ edit ] Compression based methods postulate that the optimal segmentation is the one that minimizes, over all possible segmentations, the coding length of the data.
The method describes each segment by its texture and boundary shape. Each of these components is modeled by a probability distribution function and its coding length is computed as follows: The boundary encoding leverages the fact that regions in natural images tend to have a smooth contour.
This prior is used by Huffman coding to encode the difference chain code of the contours in an image. Thus, the smoother a boundary is, the shorter coding length it attains.
Texture is encoded by lossy compression in a way similar to minimum description length MDL principle, but here the length of the data given the model is approximated by the number of samples times the entropy of the model.
The texture in each region is modeled by a multivariate normal distribution whose entropy has a closed form expression. An interesting property of this model is that the estimated entropy bounds the true entropy of the data from above.
This is because among all distributions with a given mean and covariance, normal distribution has the largest entropy. Thus, the true coding length cannot be more than what the algorithm tries to minimize.
For any given segmentation of an image, this scheme yields the number of bits required to encode that image based on the given segmentation.
Thus, among all possible segmentations of an image, the goal is to find the segmentation which produces the shortest coding length. This can be achieved by a simple agglomerative clustering method. The distortion in the lossy compression determines the coarseness of the segmentation and its optimal value may differ for each image.
This parameter can be estimated heuristically from the contrast of textures in an image.Kaybus Joins with Deloitte to Provide Comprehensive Knowledge Automation Platform and Improve Employee Efficiency.
this paper is how to improve the efficiency of semantic web and how to make use of concept relations and to improve semantic web search. The ontology based information retrieval their work is on how the information’s are retrieved from world wide web but not focus on semantic.
Web Architecture from 50, feet. This document attempts to be a high-level view of the architecture of the World Wide Web. It is not a definitive complete explanation, but it tries to enumerate the architectural decisions which have been made, show how they are related, and give references to more detailed material for those interested.
The incremental learning derives its name from the incremental nature of the learning process. In incremental learning, all facets of knowledge receive a regular treatment, and there is a regular inflow of new knowledge that builds upon the past knowledge.
Smart Farming is a development that emphasizes the use of information and communication technology in the cyber-physical farm management cycle. About the Conference "International Conference on Recent Trends in Engineering & Sciences" invites you to share your research with us.
The selected and registered papers are encouraged by submitting them for Reputed Journal. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00440.warc.gz | CC-MAIN-2020-24 | 5,192 | 22 |
https://linearprogramminghelp.com/what-is-a-linear-programming-model-2/ | code | Linear programming models are also referred to as first-order, linear, or linear programming. This is a model that determines how changes in the variables affect the results of the program. The first-order linear programming model is very similar to the first law of thermodynamics. Here, energy is thought to be a distinct entity independent of any matter that is being changed. Using this model will help programmers make sure that any changes they make to the program will not alter any of the variables they are expecting to change.
There are many uses and applications of the model. Some of the examples are software that must compile and run the same every time a new input is made or a computer program that requires a constant input of data in order to run. These examples serve to prove the point that once the program is created it cannot be changed. The linear model is often used for systems where the information needed is continuous and could take an infinite amount of time to obtain. Another example is the navigation system on an aircraft.
A good example of linear systems is a car’s control unit. The computer in the unit uses a continuously running series of instructions as it adjusts the accelerator and gear ratio to move the car around town. If the car were to suddenly go faster one would have to re-write the program in order to keep the car on the track. The model is linear because it only allows change to occur when one or more variables are changing. This keeps the system very simple.
A good use for the linear programming model is in the aerospace industry. The system needs to be able to run forever without needing to be changed. Also it must be able to plot the best possible course for the vehicle and avoid obstacles. In order to do all this with a great deal of programming must be done and constantly updated. It would be very difficult to come up with a new design if all of the information used in the calculation is not correct.
The next time you hear someone say linear programming, think about what it means. In order for the software to be of any use it must allow the programmer to create a new program or change the existing program. To do so the programmer must first collect the required information and then convert it to a form the computer can understand. There are several ways to do this depending on the programming language being used.
A common question that people have is what happens if the input information changes while the program is being calculated? In a linear programming model if this happens the program will be re-calculated resulting in a different output. In order to deal with this the programmer can set up a new program or make some changes to the existing program. The only way to know if the calculation has been altered is to check the results of the previous calculations. If they are different then the calculations were wrong.
A final question that may be asked is what is the advantage of using a linear programming model. The answer is that it allows the programmer more control over the accuracy of the calculations. It also allows for more flexibility in how to interpret the results. There are more complex models available but most companies prefer to stick with the basic model because they are easier to understand and use. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00288.warc.gz | CC-MAIN-2023-40 | 3,314 | 7 |
http://prototypejs.org/doc/1.6.0/ | code | Prototype API Documentation
Welcome to the API documentation for Prototype. The left column contains the main sections. When you activate a section, its detailed contents then appears at the top of the column.
We are doing our best to provide you with current, clear, example-rich documentation. The goal here is that you should, when reading it, get the same warm, fuzzy feeling as we do when using Prototype :-)
The API documentation is a reference documentation. If you need to learn how to use Prototype, or to acquire skills in a particular feature area, have a look at the Learn section, which is accessible from the top of all pages (the “Tips and Tutorials” link), and is also linked from the orange bar in all reference pages. Documentation in the Learn section is more narrative and tutorial-style.
Enjoy the docs!
Documentation tools and bundles
- downloadable PDF version by Josh Clark
- downloadable CHM version by Remi
- API Search Bookmarklet: Drag this bookmark to your browser’s toolbar. Type in a method to view its documentation. For example,
Event.observe. Search Prototype API
- Dashboard widget
- Firefox sidebar
For those of you using older versions of Prototype, you can download corresponding PDF versions of the docs below, courtesy of Josh Clark.
Downloadable CHM versions are also available: | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705043997/warc/CC-MAIN-20130516115043-00018-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,325 | 14 |
http://the-dark-stalkers-are-dead.obsidianportal.com/characters/dread-pirates-rangor | code | The Dark Stalkers of Faerun
Dread Pirates Rangor
These pirates are based out of the port of Iona.
Ruthless and cut throat, the Dread Pirates Rangor are known in the southern seas as willing to do anything for the right amount of coin. If you’re their bounty, best beware. They are coming for you and it will not be pretty. | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863972.16/warc/CC-MAIN-20180521082806-20180521102806-00502.warc.gz | CC-MAIN-2018-22 | 324 | 4 |
https://github.com/rubygems/rubygems.org/issues/1899 | code | Join GitHub today
GitHub is home to over 31 million developers working together to host and review code, manage projects, and build software together.Sign up
Unable to edit gem online #1899
We had the edit page deprecated for months before we removed it: #1815 . Please use spec metadata to set those links (you get per version links, check metadata milestone for details). | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205163.72/warc/CC-MAIN-20190326115319-20190326141319-00505.warc.gz | CC-MAIN-2019-13 | 373 | 4 |
https://danamartens.tech/2016/03/03/spice-it-up-most-final-prototype-presentation/ | code | So we come at last to the end of our first project for Mobile Media, our food-app for either iOS or Android.
It’s funny, but my biggest reflection doing this – besides always do more user testing (which I learned hard last semester) – was actually on iOS UI elements and navigation.
Although I took the iOS class last semester and learned how to code in Swift, what all the apple UI elements were called, the Human Interface Guidelines etc., it really didn’t occur to me until this semester that I’m basically a wolf in sheep’s clothing (who can’t design his own wolf pelt???)
What I mean is that I have been, am, and unless something drastically changes, going to be an Android user first and foremost, at least as far as my primary screen – my phone- is concerned. This was actually less difficult to deal with last semester, as we never really focused on the integral components that make up good user experience. While I knew how to programmatically create different transitions, it never occurred to me why I should be using them. Everything else was actually built into xCode to make it easier on me.
However throughout this project, I frequently caught myself treating my iOS app – which I wanted to design after doing the backend for iOS and no front end – as an android app, just with different buttons. One place this became clear to me was with the back button. On iOS it is much less frequently employed, and pretty much always in the top left corner. It is really used for table views, UI collection views, and other arrays of information that lead to individual detail pages (master-detail). With Android, on the other hand, it is built into the very operating system itself. You get home, back (and something else depending on hardware). So when I built my interactive prototype I was in hot water a couple of times, because the back button on iOS usually goes back to the last screen, which works in a master-detail application for example. However this treatment doesn’t really work as an effective UX tool the same way. Looking back, I probably should have used less of the back button, and maybe sucked it up and added a bottom nav tab bar or other navigation element to help me not need to use this feature.
Anyway, that isn’t the point of this post, but just found it a really interesting experience, being an Android user who sometimes uses an iPad for programming, I have been embedded with a certain flow and tactile sensation of navigation through my device which is very very different on Apple (I’d argue worse… but people would say same about Android.)
User Feedback on Last Weeks Model
- Some of wording is still unclear, especially whether it is a button or a label. E.G. Review on the Detailview.
- I want to be able to click on the trending and nearby labels right from the main view. You should make them clickable instead.
- The filter page is good, but it isn’t clear to me right now where this goes to and how it works. If you will have subfilters, build views for them to show the relationship, and then how to get from them to the detail view page.
- Unclear how the rate screen processes the info. Link rate button to another page to show that it is posted.
- Link the reviews at the bottom of profileView back to the detailView page. They should be clickable links so that you can use them for navigation.
- Link more of the extra pictures and ratings that you have for stock images to the detailView. It’s more fun to be able to click on anything and have the navigation work.
- I want to be able to see the menu and order the food, where can I do that? Isn’t that one of the points you mentioned is key?
Interactive Feedback Link & Presentation | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130615.94/warc/CC-MAIN-20201001030529-20201001060529-00389.warc.gz | CC-MAIN-2020-40 | 3,720 | 15 |
http://lists.mythtv.org/pipermail/mythtv-users/attachments/20070206/23e7c06f/attachment.htm | code | <br><br><div><span class="gmail_quote">On 2/6/07, <b class="gmail_sendername">Jeffery Swan</b> <<a href="mailto:[email protected]">[email protected]</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br><pre>Does anyone know... can I use LVM over multiple NAS so that it all looks like a big /multimedia drive? I'll have to do some more experimenting when I get <br>another NAS up and running.</pre></div></blockquote>
<div><br>You'll want to read about the network/block-device drivers like NBD, GNBD, iSCSI and ATA over ethernet. These make a daemon running on another machine appear to be a block device on the local machine. These block devices can be joined together with the software RAID or LVM systems to make a larger device you can put a filesystem on. | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528037.92/warc/CC-MAIN-20190722133851-20190722155851-00387.warc.gz | CC-MAIN-2019-30 | 873 | 3 |
https://community.monday.com/t/put-boards-in-groups-and-include-all-bords-within-a-group-in-a-dashboard/7047 | code | I cant seem to figure out how to manage resources tagged in different boards (projects) without manually updating wich boards that are included in a dashboard/overview.
When creating dashboards i must manually include each end every board i want to data from in the dashboard. When creating a new board (project) one should be able to automatically get data from that board included in a dashboard.
One should be able to put multiple boards into groups in order to include all boards within this group in dashboards to get an total overview.
Is there someone out there who already has a solution to this? Would be much appreciated
At this point i feel that using monday.com makes us vulnerable when managing resources. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348526471.98/warc/CC-MAIN-20200607075929-20200607105929-00432.warc.gz | CC-MAIN-2020-24 | 718 | 5 |
https://channel9.msdn.com/coding4fun/blog/Creating-audio-signals-in-Net | code | Creating audio signals in .Net
- How Audio Data is Represented
- Demystifying the WAV Format
- Synthesizing Simple Wave Audio using C#
- Algorithms for Different Sound Waves in C#
Dan goes into depth on how audio is represented, explains how the WAV format works then creates different audio waves like the sawtooth, square and sine wave. He gives the equation then an implementation with c#. With part 3 and 4, Dan provides full source code so you too can rock out with some audio dial tones. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831933.96/warc/CC-MAIN-20181219090209-20181219112209-00395.warc.gz | CC-MAIN-2018-51 | 493 | 6 |
https://forum.wurmonline.com/index.php?/profile/42144-gnomegates/ | code | I guess I'm confused as to what you expected to happen when you mixed your two dyes. You say that you combined two different QL dyes that shared the same RGB. If you were to dye two items, one with each of the different QL dyes, would they not be different colors? Almost every other item in the game, when the QL of the same items are different and combined, the result is the new combined average of the item, based off of the QL and the amount combined. For example, if I have 50ql iron and combine it with 50ql iron, I should expect to have 50ql iron afterwards, and I do. But if I combine 50ql iron with 25ql iron, the result is shifted to a lower ql, again based on the amount of each QL combined. Why should dye be any different. It is how the game has worked forever. Even now if I mix low QL red dye with high QL red dye, one would never expect to retain the high QL dye. If that were the case it would be much to easy to keep making high QL dye, just keep adding low QL garbage dye to keep up your stocks of high QL dye. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00670.warc.gz | CC-MAIN-2020-45 | 1,030 | 1 |
https://www.freelancer.sg/projects/testing-qa/quality-assurance-analyst/ | code | Looking for someone with experience in Test documentation development according to standards in different types of testing: Functionality, System, Regression, Stress, Smoke, Usability, Performance, Cross browser, Cross platform, Acceptance, Ad hoc, Exploratory and others.
The person or agency should be able to write the test cases, produce proper documentation and reports.
71 freelancers are bidding on average $6/hour for this job
Hi there, I've 9 years of experience as a QA Engineer. I would like to be part of the project. Please review my profile. Looking forward to hearing from you. Thanks,
I have 9 years of project experience in Software Testing BFS vertical with an extensive experience in Automation testing(Selenium), Requirement Analysis, Functional Testing, Quality Reviews and Testing. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00311.warc.gz | CC-MAIN-2020-24 | 803 | 5 |
http://www.coderanch.com/t/297664/JDBC/databases/connection-database | code | My servlet connect on my localhost with my database but when the file is transfer over a web server the connection will work for a while and refuse to connect avec a restart a new browser. Please help (I am using servlet and mysql database)
"jeanbkus", "Khurram Shahood(SCJP2)", Your names are not valid. The Java Ranch has thousands of visitors every week, many with surprisingly similar names. To avoid confusion we have a naming convention, described at http://www.javaranch.com/name.jsp. We require names to have at least two words, separated by a space, and strongly recommend that you use your full real name. Please edit your profile and select a new name which meets the requirements. Thanks. Dave
Joined: Aug 05, 2002
Thanks you all for helping me. The connection pool was the solution of my problems was resolved. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398444974.3/warc/CC-MAIN-20151124205404-00173-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 823 | 4 |
https://community.filemaker.com/thread/115677 | code | The discussion in that thread isn't very clear. It's documenting a very specific situiaton.
a) you have a portal on your layout
b) You have a portal filter defined that reduces the number of related records displayed in the portal
c) you have an OnLayoutExit Script Trigger
Without C), if you put the focus into a specific portal row and perform GTRR, you get a found set of only the recors shown in the filtered portal. If you do not have the focus in the portal row, you get all the related records--what you would see if there were no portal restricting the number of records visible in the portal.
Apparently, (I have no more access to reasons why FileMaker was designed to work the wayt it does than you do.), the OnLayoutExit trigger causes the focus to change from the portal row to the layout and thus the portal filter has no effect on the records that come up in your found set--you get the set of records that would appear in the portal if there were no filter.
My experience has no b) at all; simply put, on a regular unfiltered portal I cannot GTRR from a portal row's context when the OnLayoutExit trigger is in place (even with a script devoid of even a single Script Step).
I can GTRR into a new window, but that breaks my navigation protocol. Of course not having the trigger breaks my navigation options, too.
Shouldn't this be documented somewhere?
Then your situation does not match the thread that you referenced. If there is no portal filter, this is a different issue.
When I test GTRR with an ONlayout exit portal and GTRR, but no portal filter. It works just fine.
IF this is in a script that you are using it may help to post the exact script.
To post a script to the forum:
- You can upload a screen shot of your script by using the Upload an Image controls located just below Post A Answer.
- You can print a script to a PDF, open the PDF and then select and copy the script as text from the opened PDF to your clipboard for pasting here.
- If You have FileMaker Advanced, you can generate a database design report and copy the script as text from there.
- If you paste a text form of the script, you can use the Script Pretty box in the Known Bugs List database to paste a version that is single spaced and indented for a more professional and easier to read format.
Thanks for your continued interest, Phil.
> Then your situation does not match the thread that you referenced.
Not completely, other then the text I quoted in my origial post matches the experience I'm seeing.
> When I test GTRR with an ONlayout exit portal and GTRR, but no portal filter. It works just fine.Then that's hopeful.
> IF this is in a script that you are using it may help to post the exact script.No scripts; the portal row button uses the built-in button GTRR step, and the ScriptTrigger is attached to a script with no steps; completely blank.Attached is a relationship graph that exhibits this behavior.Layout is based on Tester, portal is based on Evaluations.Portal rows contain a foriegn key that matches the related primary key in the Products table.Button says GTRR from Products.With no trigger it matches the one Products record based on _kp_product_id=_kf_products_id_matchWith any trigger the GTRR is evaluated as if I were starting at Testers, showing the many records matching _kp_testers_id=_kf_tester_id_match(depending on which row you are on when you try it might look as if it were successful, as the first record will match correctly)Is there something wrong with my relationships? I'm not an expert with the whole parent/child thing, but I thought it was suppsed to work this way...
In databases, the devil is in the details.
What you have here may match the results but not the context. Therefore it does not apply to this specific aspect of GTRR.
What you described is not what I tested and is even further different from that of the original thread you referenced than I thought. I worked from this relationship:
where a Portal to Child was placed on the Parent layout. A GTRR button in the portal row was set to pull up a set of Child records on a layout based on Child, not a third set of records in a table related to child. In that context--which matches that of the original post, the OnLayoutExit trigger interferes with the portal filter affecting the found set produced, but without the filter, there is no observable difference in the results when you remove the script trigger.
Testing your exact scenario, I get interesting results. OnLayoutExit definitely interferes with the results, but what interesting results! I get a found set corresponding to all the records shown in the portal rather than the one related record of the row where I clicked the button.
I have figured out a work around for this issue:
Write a script.
In that script, use two GTRR steps. GTRR 1 pulls up a found set of records for Evaluations.
GTRR 2 then pulls up the related Products record.
You are welcome to switch over to Report an Issue and describe this issue as a possible bug.
I get a found set corresponding to all the records shown in the portal rather than the one related record of the row where I clicked the button.
Which is the same as you get if you drag the button off the portal and onto the layout; I'm glad I'm not crazy!Thank you for that, works even when the trigger is in place on Evaluations, too.I can't imagine FileMaker doesn't know about this behavior, but I'll see about reporting it.Thanks again for your input! | s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104631.25/warc/CC-MAIN-20170818082911-20170818102911-00077.warc.gz | CC-MAIN-2017-34 | 5,462 | 34 |
https://rkrknowledge.com/encryption-v-s-decryption/ | code | In this post, we will know what is Encryption and Decryption and what is difference between Encryption and Decryption.
The words Encryption and Decryption are very popular words. if your data is very sensitive and important and you want your data not to be hacked and not misused, then you can encrypt your data.
If you don’t use it, anyone can hack your data and use it for the wrong things.
So It is very helpful to secure and save your data.
What is Encryption?
Encryption is the method that converts the information into secret code and hides the original information.
Encryption is the process used to protect information or private messages sent between two peoples.
When we send an encrypted message or information its gets convert into machine language.
So that no one can read it in the original language, but the person whom the message has been sent, they have private-key for encryption data so that they can decrypt and read it.
If you don’t know, what is the difference between Public Key and Private Key? then we stated below the key difference of it in tabular form.
Difference between Public key & Private key
|Public key||Private key|
|The public key is used encryption algorithms to convert the message to an unreadable form.||The private key is used decryption algorithms to convert the message to an original form.|
|The Public key is used to encrypt the message, so it is also called an encryption key.||The private key is used to decrypt the message. It is also called a secret key.|
|The public key is Asymmetrical because there are two types of key private and public.||The private key is symmetrical because there is only one key that is called a secret key.|
|The receiver shares the public key with the sender.||The private key does not share with the sender.|
|It is slower than the private key.||Faster than public key.|
|Widely distributed.||Kept secret.|
|Public key encryption is known as Asymmetric.||Private key encryption is known as Symmetric.|
|It takes more time to encrypt and decrypt.||It takes less time to encrypt and decrypt compare to public key.|
|Example: AES-128, AES-192 etc.||Example: RSA, DSA etc.|
What is Decryption?
In simple words, the process of converting the encrypted data back to its original form is called decryption.
The encrypted data is called ciphertext, and the original data is called plain text, and the conversion of cipher text to plain text is called Decryption. It also requires a key so that the data can be decrypted.
Process Of Encryption and Decryption
Difference between Encryption & Decryption
|The original message converts to an unrecognizable message.||It converts a meaningless message into its original message.|
|In this process, the encrypted message encrypted with the public key.||In this process, the encrypted message decrypted with the private key.|
|It converts the plain text into ciphertext.||It is convert the ciphertext into plain text.|
|The nature of conversion is an automatic process.||The nature of conversion is automatic or manual.|
|It occurs at the sender’s end.||It occurs at the receiver’s send.|
|The algorithm is less complex and faster.||The algorithm is more complex and slower.|
|The employee while sending essential documents to his/her managers.||The manager while receiving a few critical documents to their employees.|
Advantages of Encryption
- Encryption makes your data completely secure and safe.
- After encrypting the data to a large extent, even if the data is hacked or stolen, no one can access or read your data.
- By using Encryption, your data is only accessible through a decrypted key. You may need a password that you want to allow.
- It is secure and flexible.
- we can not easily read any encrypted data and message.
Disadvantages of Encryption
- The biggest disadvantage of encrypting is that you cannot access this file unless you don’t have a password or if you forget your password, then accessing the file is almost impossible.
- It is hard to implement.
- It is slower than symmetric algorithms.
- The ciphertext may be larger than plain text. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00532.warc.gz | CC-MAIN-2023-50 | 4,092 | 44 |
https://thecybersecurity.news/general-cyber-security-news/cert-ua-uncovers-new-malware-wave-distributing-oceanmap-masepie-steelhook-28364/ | code | The Laptop or computer Emergency Reaction Crew of Ukraine (CERT-UA) has warned of a new phishing marketing campaign orchestrated by the Russia-joined APT28 group to deploy beforehand undocumented malware these types of as OCEANMAP, MASEPIE, and STEELHOOK to harvest delicate information.
The activity, which was detected by the agency involving December 15 and 25, 2023, targets authorities entities with email messages urging recipients to click on on a backlink to perspective a doc.
Future WEBINAR From Person to ADMIN: Master How Hackers Acquire Full Control
Explore the top secret tactics hackers use to develop into admins, how to detect and block it ahead of it truly is too late. Sign up for our webinar right now.
Be part of Now
MASEPIE is a Python-dependent instrument to down load/add documents and execute commands, with communications with the command-and-command (C2) server having place over an encrypted channel making use of the TCP protocol.
The attacks more pave the way for the deployment of further malware, including a PowerShell script referred to as STEELHOOK that’s able of harvesting web browser information and exporting it to an actor-managed server in Foundation64-encoded format.
Also shipped is a C#-centered backdoor dubbed OCEANMAP that is intended to execute instructions making use of cmd.exe.
“The IMAP protocol is made use of as a regulate channel,” CERT-UA explained, adding persistence is reached by generating a URL file named “VMSearch.url” in the Windows Startup folder.
“Instructions, in Foundation64-encoded kind, are contained in the ‘Drafts’ of the corresponding email directories each and every of the drafts includes the name of the laptop, the name of the user and the model of the OS. The benefits of the commands are saved in the inbox listing.”
The agency even further pointed out that reconnaissance and lateral movement routines are carried out inside of an hour of the original compromise by taking advantage of instruments like Impacket and SMBExec.
The disclosure comes months after IBM X-Drive disclosed APT28’s use of lures relevant to the ongoing Israel-Hamas war to facilitate the supply of a custom backdoor referred to as HeadLace.
In new weeks, the prolific Kremlin-backed hacking team has also been attributed to the exploitation of a now-patched critical security flaw in its Outlook email provider (CVE-2023-23397, CVSS score: 9.8) to achieve unauthorized access to victims’ accounts inside of Trade servers.
Uncovered this posting intriguing? Abide by us on Twitter and LinkedIn to browse far more exclusive material we write-up.
Some pieces of this posting are sourced from: | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473558.16/warc/CC-MAIN-20240221202132-20240221232132-00155.warc.gz | CC-MAIN-2024-10 | 2,665 | 15 |
https://rdrr.io/rforge/XGR/man/xRdWrap.html | code | Function to wrap texts from Rd files
xRdWrap is supposed to wrap texts from Rd files under a given
xRdWrap(path = "./XGR/man", remove.dontrun = FALSE)
a directory containing Rd files
logical to indicate whether to remove the restriction of not running examples. By default, it sets to FALSE without any modefications
This auxiliary function helps create a new package. The orignal Rd files will be replaced with new ones.
# xRdWrap(path="./XGR/man", remove.dontrun=FALSE)
Want to suggest features or report bugs for rdrr.io? Use the GitHub issue tracker. | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542588.29/warc/CC-MAIN-20161202170902-00227-ip-10-31-129-80.ec2.internal.warc.gz | CC-MAIN-2016-50 | 554 | 8 |
http://search.sys-con.com/node/2514700 | code | |By PR Newswire||
|January 16, 2013 08:00 AM EST||
Google Creative Lab tops Ad Council and Team Detroit to win $1 million in free media
MCLEAN, Va., Jan. 16, 2013 /PRNewswire/ -- USA TODAY announced today the winner of the 2012 USA TODAY Print Advertising Competition. Google Creative Lab took the top prize for most creative original print ad and won $1 million in full-page print advertising in USA TODAY. Ad Council and Team Detroit were named finalists. The announcement was made today by the executive sponsors of the competition – Larry Kramer, president and publisher of USA TODAY and Maryam Banikarim, senior vice president and chief marketing officer, Gannett Co., Inc.
The winning ad submitted by Google Creative Lab featured the Dalai Lama and Desmond Tutu coming together for a live broadcast Hangout on Google+. Two finalists were also named in the competition: The Ad Council for an ad for Save the Children; and Team Detroit and Ohio Art for an ad for nanoblocks with the theme "the smaller the blocks, the better the detail." All three ads can be viewed at www.usatoday.com/printcompetition.
The competition was open to advertising agencies, marketers and non-profit organizations. Ads could be part of a new or existing campaign. Judging was based on three criteria: creativity and originality of advertisement, visual storytelling and clarity of writing. Ads were judged by industry executives including: Chip Kidd, designer/writer, Knopf Doubleday Publishing Group; Sean McLaughlin, creative director, Wieden + Kennedy; Chuck Porter, chairman, Crispin Porter + Bogusky; Tiffany Rolfe, chief content officer, Co Collective; Nik Studzinski, executive creative director, Droga5; and Michael Wolff, media columnist, USA TODAY.
"It takes great creativity to tell a story in a meaningful way and make a consumer want to know more. Print ads offer a unique and powerful canvas to tell an advertiser's story that will get noticed by consumers. We congratulate our winner and finalists on sending us their best creative and proving the type of impact print advertising can have as part of a multimedia campaign," said Larry Kramer.
Maryam Banikarim said, "This competition was an interesting exercise in looking at the role of print in an advertiser's brand story in 2013. Advertising has changed over the years but the objective is still the same – to get consumers to take notice of your brand. Sometimes a simple ad can convey a futuristic idea. We think our judges did a terrific job of determining what really makes a great print advertisement."
USA TODAY is a multi-platform news and information media company. Founded in 1982, USA TODAY's mission is to serve as a forum for better understanding and unity to help make the USA truly one nation. Through its unique visual storytelling, USA TODAY delivers high-quality and engaging content across print, digital, social and video platforms. An innovator of news and information, USA TODAY reflects the pulse of the nation and serves as the host of the American conversation – today, tomorrow and for decades to follow. USA TODAY, the nation's number one newspaper in print circulation with an average of more than 1.6 million daily, and USATODAY.com, an award-winning newspaper website launched in 1995, reach a combined 6.6 million readers daily. USA TODAY is a leader in mobile applications with more than 16 million downloads on mobile devices. USA TODAY is owned by Gannett Co., Inc. (NYSE: GCI).
SOURCE USA TODAY
P2P RTC will impact the landscape of communications, shifting from traditional telephony style communications models to OTT (Over-The-Top) cloud assisted & PaaS (Platform as a Service) communication services. The P2P shift will impact many areas of our lives, from mobile communication, human interactive web services, RTC and telephony infrastructure, user federation, security and privacy implications, business costs, and scalability. In his session at @ThingsExpo, Robin Raymond, Chief Architect at Hookflash, will walk through the shifting landscape of traditional telephone and voice services ...
Nov. 25, 2014 06:00 PM EST Reads: 1,255
Scott Jenson leads a project called The Physical Web within the Chrome team at Google. Project members are working to take the scalability and openness of the web and use it to talk to the exponentially exploding range of smart devices. Nearly every company today working on the IoT comes up with the same basic solution: use my server and you'll be fine. But if we really believe there will be trillions of these devices, that just can't scale. We need a system that is open a scalable and by using the URL as a basic building block, we open this up and get the same resilience that the web enjoys.
Nov. 25, 2014 06:00 PM EST Reads: 686
The Internet of Things is tied together with a thin strand that is known as time. Coincidentally, at the core of nearly all data analytics is a timestamp. When working with time series data there are a few core principles that everyone should consider, especially across datasets where time is the common boundary. In his session at Internet of @ThingsExpo, Jim Scott, Director of Enterprise Strategy & Architecture at MapR Technologies, discussed single-value, geo-spatial, and log time series data. By focusing on enterprise applications and the data center, he will use OpenTSDB as an example t...
Nov. 25, 2014 06:00 PM EST Reads: 724
The Domain Name Service (DNS) is one of the most important components in networking infrastructure, enabling users and services to access applications by translating URLs (names) into IP addresses (numbers). Because every icon and URL and all embedded content on a website requires a DNS lookup loading complex sites necessitates hundreds of DNS queries. In addition, as more internet-enabled ‘Things' get connected, people will rely on DNS to name and find their fridges, toasters and toilets. According to a recent IDG Research Services Survey this rate of traffic will only grow. What's driving t...
Nov. 25, 2014 05:30 PM EST Reads: 1,034
Enthusiasm for the Internet of Things has reached an all-time high. In 2013 alone, venture capitalists spent more than $1 billion dollars investing in the IoT space. With "smart" appliances and devices, IoT covers wearable smart devices, cloud services to hardware companies. Nest, a Google company, detects temperatures inside homes and automatically adjusts it by tracking its user's habit. These technologies are quickly developing and with it come challenges such as bridging infrastructure gaps, abiding by privacy concerns and making the concept a reality. These challenges can't be addressed w...
Nov. 25, 2014 04:30 PM EST Reads: 1,130
Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy. How should your organization enhance its IT framework to enable an Internet of Things implementation? In his session at Internet of @ThingsExpo, James Kirkland, Chief Architect for the Internet of Things and Intelligent Systems at Red Hat, described how to revolutioniz...
Nov. 24, 2014 07:00 PM EST Reads: 1,550
Bit6 today issued a challenge to the technology community implementing Web Real Time Communication (WebRTC). To leap beyond WebRTC’s significant limitations and fully leverage its underlying value to accelerate innovation, application developers need to consider the entire communications ecosystem.
Nov. 24, 2014 12:00 PM EST Reads: 1,432
The definition of IoT is not new, in fact it’s been around for over a decade. What has changed is the public's awareness that the technology we use on a daily basis has caught up on the vision of an always on, always connected world. If you look into the details of what comprises the IoT, you’ll see that it includes everything from cloud computing, Big Data analytics, “Things,” Web communication, applications, network, storage, etc. It is essentially including everything connected online from hardware to software, or as we like to say, it’s an Internet of many different things. The difference ...
Nov. 24, 2014 11:00 AM EST Reads: 1,559
Cloud Expo 2014 TV commercials will feature @ThingsExpo, which was launched in June, 2014 at New York City's Javits Center as the largest 'Internet of Things' event in the world.
Nov. 24, 2014 09:00 AM EST Reads: 1,577
SYS-CON Events announced today that Windstream, a leading provider of advanced network and cloud communications, has been named “Silver Sponsor” of SYS-CON's 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York, NY. Windstream (Nasdaq: WIN), a FORTUNE 500 and S&P 500 company, is a leading provider of advanced network communications, including cloud computing and managed services, to businesses nationwide. The company also offers broadband, phone and digital TV services to consumers primarily in rural areas.
Nov. 23, 2014 07:30 PM EST Reads: 1,764
"There is a natural synchronization between the business models, the IoT is there to support ,” explained Brendan O'Brien, Co-founder and Chief Architect of Aria Systems, in this SYS-CON.tv interview at the 15th International Cloud Expo®, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Nov. 23, 2014 12:00 PM EST Reads: 1,707
The major cloud platforms defy a simple, side-by-side analysis. Each of the major IaaS public-cloud platforms offers their own unique strengths and functionality. Options for on-site private cloud are diverse as well, and must be designed and deployed while taking existing legacy architecture and infrastructure into account. Then the reality is that most enterprises are embarking on a hybrid cloud strategy and programs. In this Power Panel at 15th Cloud Expo (http://www.CloudComputingExpo.com), moderated by Ashar Baig, Research Director, Cloud, at Gigaom Research, Nate Gordon, Director of T...
Nov. 23, 2014 07:45 AM EST Reads: 1,714
An entirely new security model is needed for the Internet of Things, or is it? Can we save some old and tested controls for this new and different environment? In his session at @ThingsExpo, New York's at the Javits Center, Davi Ottenheimer, EMC Senior Director of Trust, reviewed hands-on lessons with IoT devices and reveal a new risk balance you might not expect. Davi Ottenheimer, EMC Senior Director of Trust, has more than nineteen years' experience managing global security operations and assessments, including a decade of leading incident response and digital forensics. He is co-author of t...
Nov. 22, 2014 05:30 PM EST Reads: 1,457
ARMONK, N.Y., Nov. 20, 2014 /PRNewswire/ -- IBM (NYSE: IBM) today announced that it is bringing a greater level of control, security and flexibility to cloud-based application development and delivery with a single-tenant version of Bluemix, IBM's platform-as-a-service. The new platform enables developers to build ap...
Nov. 22, 2014 05:30 PM EST Reads: 1,530
Technology is enabling a new approach to collecting and using data. This approach, commonly referred to as the "Internet of Things" (IoT), enables businesses to use real-time data from all sorts of things including machines, devices and sensors to make better decisions, improve customer service, and lower the risk in the creation of new revenue opportunities. In his General Session at Internet of @ThingsExpo, Dave Wagstaff, Vice President and Chief Architect at BSQUARE Corporation, discuss the real benefits to focus on, how to understand the requirements of a successful solution, the flow of ...
Nov. 21, 2014 08:00 PM EST Reads: 1,534
The security devil is always in the details of the attack: the ones you've endured, the ones you prepare yourself to fend off, and the ones that, you fear, will catch you completely unaware and defenseless. The Internet of Things (IoT) is nothing if not an endless proliferation of details. It's the vision of a world in which continuous Internet connectivity and addressability is embedded into a growing range of human artifacts, into the natural world, and even into our smartphones, appliances, and physical persons. In the IoT vision, every new "thing" - sensor, actuator, data source, data con...
Nov. 21, 2014 08:00 PM EST Reads: 1,458
"BSQUARE is in the business of selling software solutions for smart connected devices. It's obvious that IoT has moved from being a technology to being a fundamental part of business, and in the last 18 months people have said let's figure out how to do it and let's put some focus on it, " explained Dave Wagstaff, VP & Chief Architect, at BSQUARE Corporation, in this SYS-CON.tv interview at @ThingsExpo, held Nov 4-6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Nov. 21, 2014 07:00 PM EST Reads: 1,385
Focused on this fast-growing market’s needs, Vitesse Semiconductor Corporation (Nasdaq: VTSS), a leading provider of IC solutions to advance "Ethernet Everywhere" in Carrier, Enterprise and Internet of Things (IoT) networks, introduced its IStaX™ software (VSC6815SDK), a robust protocol stack to simplify deployment and management of Industrial-IoT network applications such as Industrial Ethernet switching, surveillance, video distribution, LCD signage, intelligent sensors, and metering equipment. Leveraging technologies proven in the Carrier and Enterprise markets, IStaX is designed to work ac...
Nov. 20, 2014 09:15 PM EST Reads: 1,436
C-Labs LLC, a leading provider of remote and mobile access for the Internet of Things (IoT), announced the appointment of John Traynor to the position of chief operating officer. Previously a strategic advisor to the firm, Mr. Traynor will now oversee sales, marketing, finance, and operations. Mr. Traynor is based out of the C-Labs office in Redmond, Washington. He reports to Chris Muench, Chief Executive Officer. Mr. Traynor brings valuable business leadership and technology industry expertise to C-Labs. With over 30 years' experience in the high-tech sector, John Traynor has held numerous...
Nov. 20, 2014 06:00 PM EST Reads: 1,384
The 3rd International @ThingsExpo, co-located with the 16th International Cloud Expo - to be held June 9-11, 2015, at the Javits Center in New York City, NY - announces that it is now accepting Keynote Proposals. The Internet of Things (IoT) is the most profound change in personal and enterprise IT since the creation of the Worldwide Web more than 20 years ago. All major researchers estimate there will be tens of billions devices - computers, smartphones, tablets, and sensors - connected to the Internet by 2020. This number will continue to grow at a rapid pace for the next several decades.
Nov. 20, 2014 01:00 PM EST Reads: 1,659 | s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004246.54/warc/CC-MAIN-20141125155644-00215-ip-10-235-23-156.ec2.internal.warc.gz | CC-MAIN-2014-49 | 14,913 | 50 |
https://or.meta.stackexchange.com/questions/357/why-was-this-close-vote-invalidated-for-a-question-with-multiple-questions/358#358 | code | I am the other reviewer, the sole reviewer with whom a moderator agreed. I have done 1000's of Close Reviews on our main meta site. My reason is essentially the same as prubin's answer.
Instead of viewing the questions as seperate, "and" the questions into a combined set of conditions, to the extent possible, leaving one question and perhaps one or two very closely related questions that don't add significant length to the answer (singular answer):
"I have previously used MOSEK for all my SDP needs. ... SCS has been recommended for very large instances due to lower memory requirements.
That's led me to a series of questions:
- Is there some rules of thumb for choosing appropriate solvers depending on problem instances, or is my best chance just trying out all of them?
- What are the differences between MOSEK, SCS and other SDP solvers, and what is the trade-off of one w.r.t. the other? (i.e., is SCS generally slower? Does MOSEK generally consumes more memory? Or is it all dependent on the specific instance one tries to solve?)
- When dealing with dense, large SDPs, what would be my best bets, solver-wise?
- Does having access to a sizeable cluster helps in any way? For instance, are there solvers that can exploit distributed memory? Or is my best choice going for a single node with a large memory?".
I'm not going to dissect the above and place it into two piles, possibly a third very small pile; just accept that I was able to do so.
Having all the questions together, answered with an answer conditional on all of them, is more efficient than trying to say "for this question please only answer the first question, to answer the other questions go here ..." - following by an index of links for each answer.
These answers are not a precedent to ask as many questions in one question slot as one wishes; this is reviewed on a case-by-case basis. Usually, multiple questions will be denied.
You were not wrong to ask for this question to be reviewed, we did that and thank you for your effort. The result is our decision, not so much that you are "wrong", just that we will let the question stand as-is. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297290384.96/warc/CC-MAIN-20240425063334-20240425093334-00217.warc.gz | CC-MAIN-2024-18 | 2,125 | 12 |
https://tracker.moodle.org/browse/MDL-50573 | code | While MDL-50176 got in, we saw performance regression and stronk7's investigation revealed some interesting points (See MDL-50176 for details).
It will be nice to lazy load google library for optimization.
Youtube repository no longer working
Use PSR standard to autoload Google classes
Change the approach of the PSR autoloading and allow PSR-4
Error rendering 'clockify-timesheets-time-tracking-reports:timer-sidebar'. Please contact your Jira administrators. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816586.79/warc/CC-MAIN-20240413051941-20240413081941-00088.warc.gz | CC-MAIN-2024-18 | 461 | 6 |
https://mike-naylor.blogspot.com/2012/04/knight-maze.html | code | From the Infinity conference a few years back in Ann Arbor, Michigan. I gave a talk on knight mazes and began the talk with a puzzle. You are a chess knight. Start anywhere you like on the left edge of this board and reach the star by moving only legal knight moves on the white squares. You may not leave the board or land on a blue square.
It's a maze I designed especially to be difficult. Give it a try! The talk focused on interesting maze elements and concluded with an analysis of this maze and the key to why it is so challenging.
Post a Comment | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00271.warc.gz | CC-MAIN-2023-14 | 553 | 3 |
https://opensourcelibs.com/libs/chip-seq | code | 37 Open Source Chip Seq Software Projects
Free and open source chip seq code projects including engines, APIs, generators, and tools.
Dolphinnext 81 ⭐
A graphical user interface for distributed data processing of high throughput genomics
Reg Gen 63 ⭐
Regulatory Genomics Toolbox: Python library and set of tools for the integrative analysis of high throughput regulatory genomics data.
Haystack_bio 36 ⭐
Haystack: Epigenetic Variability and Transcription Factor Motifs Analysis Pipeline
Seq2science 81 ⭐
Automated and customizable preprocessing of Next-Generation Sequencing data, including full (sc)ATAC-seq, ChIP-seq, and (sc)RNA-seq workflows. Works equally easy with public as local data.
Bohdan Khomtchouk Microscope 20 ⭐
ChIP-seq/RNA-seq analysis software suite for gene expression heatmaps
Sevenc 11 ⭐
7C: Computational Chromosome Conformation Capture by Correlation of ChIP-seq at CTCF motifs
Ccbr Pipeliner 36 ⭐
An open-source and scalable solution to NGS analysis powered by the NIH's Biowulf cluster.
Ancetran 10 ⭐
AnceTran2.0: R package for transcriptome evolution analysis based on RNA-seq expression data or ChIP-seq TF-binding data | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00102.warc.gz | CC-MAIN-2022-27 | 1,162 | 18 |
https://www.mail-archive.com/[email protected]/msg01786.html | code | Awesome work Jeremy! Thank you and the team for keeping this moving forward.
1. Create/Read/Update/Delete delivery services * Agree 2. Manage DS regexes * Many customers CNAME to us, so we need this to be configurable at some level * I think we should restrict users when entering additional host names to explicit full domain names, ie not allow them to enter regex matching patterns * This restricts the overlap potential and allows for collision checks against existing explicit names and regexes * Allow internal staff to enter regex matching patterns 3. Manage DS SSL keys (if applicable) * Agree, let users manage keys * Would like to allow users to upload their own certs as well 4. Manage DS URL sig keys (if applicable) 1. We are making the “sig-anchor” a configurable option in the next release so they need to be able to manipulate this 2. We also need to give them access to manipulate the URL signing bypass parameters which are regex values, we typically bypass signing for crossdomain.xml and clientaccesspolicy.xml or for a particular directory like where they store their thumbnails. 5. Manage DS URI signing keys (if applicable) 1. Agree we should let them manage the keys 6. Manage DS targets (steering* only) 1. Don’t have an opinion on this, haven’t used it. 7. Creating DS invalidate content jobs 1. I want to eventually add purge or true delete as an option but otherwise this is good. 8. Manage DS / cache assignments 1. Agree, should only be controlled by CDN operations Ryan Durfey M | 303-524-5099 CDN Support (24x7): 866-405-2993 or [email protected]<mailto:[email protected]> From: Jeremy Mitchell <[email protected]> Reply-To: "[email protected]" <[email protected]> Date: Friday, February 2, 2018 at 1:50 PM To: "[email protected]" <[email protected]> Subject: Delivery Service Self-Service As we move in the direction of self-service, there are a few obstacles that need to be overcome and I'd like to discuss them a bit so grab a cup of coffee... When I say self-service, what I really mean is "delivery service self-service" or the ability to manage your own delivery services (as dictated by tenancy) and everything related to those delivery services. "Everything" includes the following (afaik): 1. Create/Read/Update/Delete delivery services 2. Manage DS regexes 3. Manage DS SSL keys (if applicable) 4. Manage DS URL sig keys (if applicable) 5. Manage DS URI signing keys (if applicable) 6. Manage DS targets (steering* only) 7. Creating DS invalidate content jobs 8. Manage DS / cache assignments If you can't do 1-7 yourself, it's not really self-service is it? #8 is debatable. I'll discuss each one: 1. Create/Read/Update/Delete delivery services Ideally, you could CRUD your own delivery services but our system has some limitations. A) Our CDN is not properly insulated from bad DS configurations. If a user enters a bad value, bad things could potentially happen to a cache or worse the whole CDN. B) Certain DS configuration changes requires queue updates and/or snapshot for the change to take effect. We're not ready (nor will we ever be probably) to let normal users queue updates and/or snapshot. So in the interim, we're working on the ability to allow normal users to create "delivery service requests" to facilitate creating/updating/deleting a delivery service. These "requests" will have to be reviewed/fulfilled by a higher level user (Ops or Admin) who can then queue/snapshot if needed. 2. Manage DS regexes Here's an explanation of this: http://traffic-control-cdn.readthedocs.io/en/latest/admin/traffic_ops/using.html#delivery-service-regexp Currently, this requires the Operations role and for good reason. The danger here involves the risk of a normal user entering a bad regex. For example, it is my understanding that the regex in position zero needs to always follow this format: .*\.foo\..*. Maybe with some better API validation we could let normal users manage DS regexes....or maybe these end up going away in favor of something better/easier...not sure yet... 3. Manage DS SSL keys SSL keys are only applicable where protocol > 0 (HTTPS, HTTP AND HTTPS, or HTTP TO HTTPS) and currently, to manage them requires the Admin role. Why? I'm not sure. Is their harm in letting normal users manage their own SSL keys? 4. Manage DS URL sig keys URL sig keys ( http://traffic-control-cdn.readthedocs.io/en/latest/admin/traffic_ops/using.html#generate-url-sig-keys) are only applicable where signingAlgorithm = 'url_sig' and currently, to manage them requires NO role apparently (only a tenancy check is performed). This is the 1st one on the list that a "normal" user can do currently. 5. Manage DS URI signing keys URI signing keys are only applicable where signingAlgorithm = 'uri_signing' and currently, to manage DS URI signing keys requires the Admin role. Is it necessary to restrict this functionality to Admins only or can we allow normal users to mange URI signing keys for their DS's? 6. Manage DS targets (steering* only) Here's an explanation of this: http://traffic-control-cdn.readthedocs.io/en/latest/admin/quick_howto/steering.html?highlight=steering Currently, to manage DS targets requires the Admin or Steering role. Is there any harm in allowing a normal user to "steer" their delivery service to another delivery service as long as the target delivery service falls in their tenancy? 7. Creating DS invalidate content jobs http://traffic-control-cdn.readthedocs.io/en/latest/admin/traffic_ops/using.html?highlight=invalidate#invalidate-content You can currently do this for your own DS's. This is the 2nd one on the list that a "normal" user can do currently. 8. Manage DS / cache assignments This is the debatable one. To provide true delivery service self-service should a user have the ability to determine which caches are assigned to their delivery service. I'm thinking NO. Currently, this action requires the Ops role and I'm in favor of leaving it that way... In summary, to provide true delivery service self-service I think we need to do a few things: 1. Introduce DS requests until the day in which DS configurations can be guaranteed and queue updates/snapshot becomes a thing of the past. (this is in progress) 2. Revisit DS regexes or make their management more fool proof. 3. Tweak the roles of these actions. Currently, a lot of these things are reserved for Ops/Admin. We have to change that or full DS self-service if not possible. I'd like to make each of these things accessible to users with the "Portal" role with the exception of #8. Thoughts? Concerns? Funny jokes? Jeremy | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746386.1/warc/CC-MAIN-20181120110505-20181120132505-00194.warc.gz | CC-MAIN-2018-47 | 6,629 | 2 |
https://www.coderanch.com/t/501947/databases/trouble-understanding-bi-directional-relationships | code | To be honest with you, I have spent considerable amount of time in understanding the associations (specially bidirectional) but still not really able to understand the basics. I could not get a simple one to many bidirectional mapping to work and I need your help in understanding the mistake and a fix.
I have two classes Stock and StockDailyRecords. (These examples are taken straightly from example. My stock class is defined as below
Similarly StockDailyRecord is as follows
My entire mapping files are as below
I'm facing two issues here
1) if I have same column in Set element of Stock and Many to One element of StockDailyRecord, I'm getting an exception
I'm not sure why I should make insert and update false here.
If I manage to change the name of the either of the columns say, from STOCK_FK_ID to STOCK_ID (Which does not make any sense to me here? Why?), I get another error
while trying to persist Stock and StockDailyRecord.
My client looks like below
create(Object) method looks as below
How do I persist both the records. When I run my client, I'm successful in persisting only Stock, but not StockDailyRecord. How do I make sure that upon persisting Stock, it should also persist StockDailyRecord without using Cascase option.
Does it make any difference in using "inverse" attribute ? | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00618.warc.gz | CC-MAIN-2023-06 | 1,302 | 13 |
https://peatec.eu/sql-server-2005-administration-pdf.html | code | The book looks at day-to-day administration, best practices, administration tips, and step-by-step configurations based on real-world examples found in the industry.
Automate SQL Server routine maintenance, encrypt SQL Server data and communications, including setting up a Certificate Authority.
Part V Disaster Recovery and High Availability (located server online) 17 Backing Up and Restoring the SQL Server 2005 Environment 597 18 Administering administration and Managing Failover Clustering 647 19 Administering and Managing Database Mirroring 691 20 Administering and Managing Log Shipping 721.
Book server Description: Microsoft SQL Server 2005 Management and Administration, based on Service Pack 2, addresses the challenges database administrators regularly encounter on SQL Server 2005 by providing detailed guidance in the areas of management, administration, security, and monitoring.Part II Managing SQL Server 2005 7 Conducting a SQL Server 2005 Health Check 261 8 SQL Server 2005 Maintenance Practices 289 9 Managing and Optimizing SQL Server 2005 Indexes 317 10 Managing Full-Text Catalogs 353 11 Creating Packages and Transferring Data 387.On the Web: Download bonus chapters from m/title/, introduction 1, part I Administering SQL Server Components 1 Administering SQL Server 2005 Database Engine 11 2 Administering SQL Server 2005 Analysis Services 67 3 Administering SQL Server 2005 Reporting Services 99 4 Administering SQL Server 2005.Monitor a SQL Server 2005 infrastructure with Operations Manager 2007, including how to configure the SQL Server Management Pack and install Operations Manager 2007.Harden a SQL Server implementation, implement SQL Server highavailability alternatives, such as Failover Clustering, administration Log Shipping, Database Mirroring, and Replication.Understand how to, configure and tune the Database Engine, Reporting Services, server Analysis Services, Integration Services, and Notification Services.If you own the copyright to this book and it is wrongfully on our website, we offer a simple dmca procedure to remove your content from our site.The following chapters are located online: Part IV SQL Server 2005 Overview (located online) 15 SQL Server 2005 Technology Primer Tools of the Trade 571.It may takes up to 1-5 minutes before you received.With that in mind, a team of experienced Microsoft Certified Professionals provides you with the necessary information to be a more competent and successful database developer or administrator. Create Integration Services packages and transfer data.
File: PDF,.11 MB, the file will be sent to selected server email address.
Understand how to * Configure and titler tune the Database Engine, Reporting Services, server Analysis Services, Integration Services, and Notification Services * Harden a SQL converter Server implementation * Implement SQL Server highavailability alternatives, such as Failover Clustering, Log Shipping, Database Mirroring, and Replication * Monitor a SQL.The file will be sent to your Kindle account.Please note you've to add our email to approved e-mail addresses.Microsoft SQL Server 2005 titler Management and Administration, based on Service Pack 2, addresses the challenges database administrators administration regularly encounter on SQL Server 2005 by providing detailed guidance innovation in the areas of management, administration, security, and monitoring.Beginning SQL Server 2005 Programming, beginning SQL Server 2005 Programming identifying Robert Vieira Beginning SQL Server 2005 Programming Beginning SQL Server.Qxp 9/27/06 9:26 PM Page iii SQL Server 2005 Bible Paul Nielsen 01_542567 ffirs.With coverage of the new features and functionality of SQL Server 2005 Service Pack 2, this book is designed to be comprehensive, resulting in something for all database administrators from simple tips to tactical solutions. Report copyright / dmca form, recommend Documents.
With coverage of the new features and functionality of SQL Server 2005 Service Pack 2, this book is designed to be comprehensive, resulting in something for all database administratorsfrom simple tips to tactical solutions.
Year: 2006, language: russian, pages: 522.
Sign In, our partners will collect data and use sql server 2005 administration pdf cookies for ad personalization and measurement. | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571027.62/warc/CC-MAIN-20190915093509-20190915115509-00512.warc.gz | CC-MAIN-2019-39 | 4,316 | 9 |
https://forum.mikrotik.com/viewtopic.php?f=2&t=75552&p=381225 | code | I am trying to set something up for my scenario. Some customers are trying to connect to my PPPoE NAS (Mikrotik) and they are not allowed by Radius for any reason. What I would like to do is: let this guy in with some specific parameters like a Pool-Name so it get an IP Address for a pool with is "access denied".
The main reason is to let this guy Authenticate into my network but it will be still unAuthorized to use it (because this pool is a blocked pool, for example) but this customer would stop asking connection to my NAS.
Is it possible? | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00180.warc.gz | CC-MAIN-2020-24 | 547 | 3 |
https://readeasy.org.uk/428/ | code | Welcome to our new news blog! This is Sue and Harry in the Read Easy UK office in Blockley.
We have all been very busy over the last few months developing our growing organisation (involving some steep learning curves with some of the things that come less naturally!)
After a lot of effort in preparing the CIO application at the end of 2012, we were really delighted to hear at the end of March that our application had been accepted and we are now registered as a Charitable Incorporated Organisation with the Charity Commission (charity number: 1151288).
Many thanks to the old Board of Trustees and a very warm welcome to the new! | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378043.81/warc/CC-MAIN-20210307170119-20210307200119-00190.warc.gz | CC-MAIN-2021-10 | 635 | 4 |
https://forums.macrumors.com/threads/please-help-lost-misplaced-ipod.1822172/ | code | Hello This is a question about a lost Ipod but I guess I can post it here as well since it's more of a general question. What I would really like to know is if it's possible to send iMessage from an Ipad to an Ipod if the the Ipod is turned off or without WIFI. ( I don't own an Iphone, the iMessage on my apple ID can only communicate with my Ipod and it only has wifi connection.) My Ipod got lost yesterday (or misplaced somewhere in my house???). I tried to send iMessage to my ipod and all messages was delievered according to my friends' Ipad. Would this be possible if my Ipod was off or without WIFI? If NO, then it's probably still in my house somewhere.. To day I changed my itunes password and now it's no longer possible to send iMessages to my ipod. BUT I don't know if it's due to me changed password OR if it's due to my Ipod being turned off/flat battery or no WIFI... Any ideas? Thanks! | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826892.78/warc/CC-MAIN-20181215152912-20181215174912-00316.warc.gz | CC-MAIN-2018-51 | 903 | 1 |
http://www.jdom.org/pipermail/jdom-interest/2000-November/002977.html | code | [jdom-interest] Re: passing doc through socket
jozart at csi.com
Thu Nov 16 06:25:05 PST 2000
I tracked down the problem I was having passing a document through a
The problem is that several XML parsers *close* the input stream when
they read eof (-1).
This is true of Xerces, which is JDOM's default parser. It is also true
Unfortunately, closing a SocketInputStream closes the underlying
SocketImpl, setting the file descriptor to null.
To workaround, protect your socket's input stream with an InputStream
wrapper that doesn't close the underlying stream, or read everything
into a buffer before handing off to the JDOM builder.
byte buf = new byte[length];
InputStream in = new ByteArrayInputStream(buf);
----- original message -----
Alex Brud alex at ispfocus.com
Wed, 18 Oct 2000 14:49:37 +0200
i'm trying to path a document converted to outputStream through a
to send the document im using (at the client end):
XMLOutputter outputter = new XMLOutputter();
outputter.output(doc,out) as 'out' == ItsSocket.getOutputStream;
to receve the stream , im using (at the server end):
DOMBuilder builder = new DOMBuilder();
doc = builder.build(itsSocket.getinputStream);
but it seems that the process is "stuck" when building the document.
the process never get beyong the builder line.
More information about the jdom-interest | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864337.41/warc/CC-MAIN-20180622010629-20180622030629-00455.warc.gz | CC-MAIN-2018-26 | 1,323 | 27 |
http://www.tomshardware.com/forum/317051-33-display-kind-problem | code | I have a question to you! Sometimes i can see something like THIS on the monitor.
Win xp sp3
I tried a lot of things, like driver,bios update, but it still wrong.
So the main problem is that, if i use lot of program, and i do not minimize the programs, the pc or vga or monitor or i do not know what cause this problem. But if i minimize all program and use only one in full size there is no problem.
Any idea what should i do? | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00093-ip-10-147-4-33.ec2.internal.warc.gz | CC-MAIN-2014-15 | 427 | 5 |
https://help.sap.com/doc/saphelp_nw75/7.5.5/en-US/52/37fbb7ea264dbfb5f4389aaebb9213/content.htm?no_cache=true | code | You can save an InfoObject under a different name in order to create a copy of it.
- You have opened an InfoObject in the editor. Choose .
- If required, select a different InfoArea to save the InfoObject under.
- Give it a different name and a description.
- Choose Finish. A copy of the InfoObject is created and is opened in the editor.
To make sure that the texts are copied in all languages when saving the InfoObject under a different name, see SAP Note 2326441 | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00600.warc.gz | CC-MAIN-2022-49 | 467 | 6 |
http://blog.elangroup-software.com/2013/11/review-of-instant-zeptojs-by-ian-pointer.html | code | The book will be useful for persons who have some previous experience with jQuery. No prior knowledge about Zepto.js is required.
Ian Pointer starts with describing of several ways of getting Zepto.js and checking it is working correctly for you. Then author talks about API of library comparing it to jQuery.
One of the most interesting features of Zepto.js is animation and Ian Pointer gives several simple examples of animation in his book. Also support of touches and gestures is described.
There is also some more advanced material in the book. It includes instruction how to build your own custom version of Zepto.js and introduction into Zepto.js plugins. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123046.75/warc/CC-MAIN-20170423031203-00354-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 662 | 4 |
https://pwfinancing.com/article/public-finance-fiscal-rules-and-public-private-partnerships-lessons-for-post-covid-19-investment-plans/ | code | Article by Alessandra Cepparulo, Giuseppe Eusepi and Luisa Giuriato
Public finances in the EU countries are strained by the COVID-19 pandemic and the war in Ukraine while public debts and deficits have reached unprecedented levels. At the same time, in the coming years governments will need to provide the infrastructure required by the digital and climate transitions and recover their backlog of investments. These policy choices will deeply impact on the future development of European countries and the dynamics of their public finances, which will be constrained also by restored or revised fiscal rules. All this makes PPPs attractive for public authorities in search of alternatives to increase investment resources without further worsening their budgetary position. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476211.69/warc/CC-MAIN-20240303075134-20240303105134-00356.warc.gz | CC-MAIN-2024-10 | 775 | 2 |
https://learn.saylor.org/course/view.php?id=26§ionid=9616 | code | Read Section 4.3 (pages 75-77). You read this section before to become acquainted with the Squeeze Theorem. When you read the section again, pay particular attention to the geometric argument used to set up the application of the Squeeze Theorem.
Work through problems 1-7 for Exercise 4.3. When you are done, check your answers against Appendix A. | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00726.warc.gz | CC-MAIN-2021-04 | 348 | 2 |
http://web2.sys-con.com/node/2503413 | code | |By Business Wire||
|January 7, 2013 05:45 PM EST||
From the 2013 International Consumer Electronics Show (CES), Samsung Telecommunications America (Samsung Mobile), the No. 1 mobile phone provider in the United States and the No. 1 smartphone provider worldwide1, today announced the Samsung ATIV Odyssey™ will be available in the coming weeks at Verizon Wireless Communications Stores and online.
The ATIV Odyssey runs on the sleek and intuitive Windows Phone 8 operating system, the only one with Live Tiles, designed to keep you closer to the people and things that matter most. (Photo: Business Wire)
The ATIV Odyssey boasts a brilliant 4-inch Super AMOLED™ touchscreen display (800x480) that allows users to watch movies, view pictures and play games in high resolution. This compact, versatile smartphone is Global Ready™, giving users the ability to call and email from more than 220 countries in the world. The ATIV Odyssey is equipped with a 1.5 GHz dual-core processor to quickly access the Internet, stream music and download content at blazing speeds using the Verizon Wireless 4G LTE network.
The ATIV Odyssey runs on the sleek and intuitive Windows Phone 8 operating system, the only one with Live Tiles, designed to keep you closer to the people and things that matter most. The ATIV Odyssey also features unique Samsung sharing applications such as Photo Editor, Mini Diary and Now, an application that provides weather, news, stock and currency updates instantly.
The ATIV Odyssey is enterprise ready with enhanced security features to offer customers an extremely powerful business tool that keeps sensitive company data secure. Security solutions include advanced Microsoft® Exchange ActiveSync® features and policy control and on-device AES 256-bit encryption.
Key Product Features:
- Super AMOLED Display (WVGA 800x480)
- 5-megapixel rear-facing camera with LED flash and full 1080p recording and 1080p playback
- Front-facing 1.2-megapixel camera for video chat
- Global Ready calling and email capabilities in more than 220 countries in the world
- Wi-Fi connectivity (802.11 a/b/g/n)
- 8 GB on board storage (actual formatted capacity is less) / 1 GB RAM
- Support for up to 64 GB microSD™ card
- 2100 mAh battery
- Photo Editor – Edit photos (crop, rotate, resize, etc.) and adjust colors (contrast, RGB and hue) instantly; users can also add effects such as Pop Art, gray-scale, red eye correction and add decorations that include frames, stickers, etc. and share on Facebook, Picasa and Photobucket.
- Mini Diary – Contains daily activities, photos and drawings; users can share their diary via social channels including Facebook, email, Picasa and Photobucket; users can back up and store information in the free cloud service SkyDrive.
- Now – Provides weather, news, stock and currency updates instantly
Windows Phone 8 Features:
Live Tiles are the heart and soul of Windows Phone, and no other phone has them. People can arrange the iconic Start screen however they want by pinning their favorite people, apps, music, games, photos and more. In addition to Live Tiles, Windows Phone 8 offers a range of new features to make your smartphone experience even more personal, including:
- The only phone with Live Apps. Live Apps bring information right to the Start screen, such as the Groupon deal of the day, flight information and news headlines. With Windows Phone 8, Live Apps such as Facebook can even deliver real-time information right to your lock screen with updated wallpaper.
- Top apps. The Windows Phone Store has more than 120,000 quality apps and games, including hits such as “Angry Birds Star Wars,” “Cut the Rope,” Disney’s “Where’s My Water,” LivingSocial, Urbanspoon and many more. Pandora, the leading Internet radio service, is also coming to Windows Phone in early 2013.
- Kid’s Corner. Exclusive to Windows Phone 8, Kid’s Corner is a way to share your phone with your kids, so they can play “Angry Birds” without texting your angry boss. Parents can now hand over their phones to the kids without worrying about deleted photos, misdirected emails, unapproved purchases or accidental phone calls. After a simple setup, parents can activate a specialized place on the phone for kids to play — complete with their own customizable Start screens — where they can access only the apps, games, music and videos picked by parents.
Editor’s Note: The ATIV Odyssey will be on display at CES 2013 in the Samsung booth (Las Vegas Convention Center, Central Hall, Booth #12004).
1 Samsung Mobile is the No. 1 mobile phone provider in the United States according to Strategy Analytics, North America Handset Vendor Marketshare, Q2 2012. Samsung Electronics Company is the No. 1 smartphone provider worldwide according to Strategy Analytics Global Smartphone Vendor Market Share by Region: Q2 2012.
Samsung, ATIV, and Super AMOLED are all trademarks of Samsung Electronics Co., Ltd.
About Samsung Telecommunications America
Samsung Telecommunications America, LLC, (Samsung Mobile) a Dallas-based subsidiary of Samsung Electronics Co., Ltd., researches, develops and markets wireless handsets, wireless infrastructure and other telecommunications products throughout North America. For more information, please visit www.samsung.com.
About Samsung Electronics Co., Ltd.
Samsung Electronics Co., Ltd. is a global leader in consumer electronics and the core components that go into them. Through relentless innovation and discovery, we are transforming the worlds of televisions, smartphones, personal computers, printers, cameras, home appliances, medical devices, semiconductors and LED solutions. We employ 227,000 people across 75 countries with annual sales exceeding US$143 billion. Our goal is opening new possibilities for people everywhere. To discover more, please visit www.samsung.com.
SYS-CON Events announced today that Roundee / LinearHub will exhibit at the WebRTC Summit at @ThingsExpo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA. LinearHub provides Roundee Service, a smart platform for enterprise video conferencing with enhanced features such as automatic recording and transcription service. Slack users can integrate Roundee to their team via Slack’s App Directory, and '/roundee' command lets your video conference ...
Sep. 29, 2016 05:45 AM EDT Reads: 1,500
Digital transformation is too big and important for our future success to not understand the rules that apply to it. The first three rules for winning in this age of hyper-digital transformation are: Advantages in speed, analytics and operational tempos must be captured by implementing an optimized information logistics system (OILS) Real-time operational tempos (IT, people and business processes) must be achieved Businesses that can "analyze data and act and with speed" will dominate those t...
Sep. 29, 2016 05:15 AM EDT Reads: 1,242
IoT is fundamentally transforming the auto industry, turning the vehicle into a hub for connected services, including safety, infotainment and usage-based insurance. Auto manufacturers – and businesses across all verticals – have built an entire ecosystem around the Connected Car, creating new customer touch points and revenue streams. In his session at @ThingsExpo, Macario Namie, Head of IoT Strategy at Cisco Jasper, will share real-world examples of how IoT transforms the car from a static p...
Sep. 29, 2016 05:00 AM EDT Reads: 1,635
Internet of @ThingsExpo, taking place November 1-3, 2016, at the Santa Clara Convention Center in Santa Clara, CA, is co-located with the 19th International Cloud Expo and will feature technical sessions from a rock star conference faculty and the leading industry players in the world and ThingsExpo Silicon Valley Call for Papers is now open.
Sep. 29, 2016 04:30 AM EDT Reads: 4,678
The Transparent Cloud-computing Consortium (abbreviation: T-Cloud Consortium) will conduct research activities into changes in the computing model as a result of collaboration between "device" and "cloud" and the creation of new value and markets through organic data processing High speed and high quality networks, and dramatic improvements in computer processing capabilities, have greatly changed the nature of applications and made the storing and processing of data on the network commonplace.
Sep. 29, 2016 04:00 AM EDT Reads: 1,211
Almost two-thirds of companies either have or soon will have IoT as the backbone of their business in 2016. However, IoT is far more complex than most firms expected. How can you not get trapped in the pitfalls? In his session at @ThingsExpo, Tony Shan, a renowned visionary and thought leader, will introduce a holistic method of IoTification, which is the process of IoTifying the existing technology and business models to adopt and leverage IoT. He will drill down to the components in this fra...
Sep. 29, 2016 04:00 AM EDT Reads: 1,813
SYS-CON Events announced today that ReadyTalk, a leading provider of online conferencing and webinar services, has been named Vendor Presentation Sponsor at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA. ReadyTalk delivers audio and web conferencing services that inspire collaboration and enable the Future of Work for today’s increasingly digital and mobile workforce. By combining intuitive, innovative tec...
Sep. 29, 2016 03:30 AM EDT Reads: 3,033
I'm a lonely sensor. I spend all day telling the world how I'm feeling, but none of the other sensors seem to care. I want to be connected. I want to build relationships with other sensors to be more useful for my human. I want my human to understand that when my friends next door are too hot for a while, I'll soon be flaming. And when all my friends go outside without me, I may be left behind. Don't just log my data; use the relationship graph. In his session at @ThingsExpo, Ryan Boyd, Engi...
Sep. 29, 2016 03:30 AM EDT Reads: 1,377
SYS-CON Events announced today that Pulzze Systems will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA. Pulzze Systems, Inc. provides infrastructure products for the Internet of Things to enable any connected device and system to carry out matched operations without programming. For more information, visit http://www.pulzzesystems.com.
Sep. 29, 2016 03:15 AM EDT Reads: 1,906
IoT offers a value of almost $4 trillion to the manufacturing industry through platforms that can improve margins, optimize operations & drive high performance work teams. By using IoT technologies as a foundation, manufacturing customers are integrating worker safety with manufacturing systems, driving deep collaboration and utilizing analytics to exponentially increased per-unit margins. However, as Benoit Lheureux, the VP for Research at Gartner points out, “IoT project implementers often ...
Sep. 29, 2016 03:15 AM EDT Reads: 3,578
There is growing need for data-driven applications and the need for digital platforms to build these apps. In his session at 19th Cloud Expo, Muddu Sudhakar, VP and GM of Security & IoT at Splunk, will cover different PaaS solutions and Big Data platforms that are available to build applications. In addition, AI and machine learning are creating new requirements that developers need in the building of next-gen apps. The next-generation digital platforms have some of the past platform needs a...
Sep. 29, 2016 03:00 AM EDT Reads: 1,880
If you’re responsible for an application that depends on the data or functionality of various IoT endpoints – either sensors or devices – your brand reputation depends on the security, reliability, and compliance of its many integrated parts. If your application fails to deliver the expected business results, your customers and partners won't care if that failure stems from the code you developed or from a component that you integrated. What can you do to ensure that the endpoints work as expect...
Sep. 29, 2016 02:45 AM EDT Reads: 1,718
As ridesharing competitors and enhanced services increase, notable changes are occurring in the transportation model. Despite the cost-effective means and flexibility of ridesharing, both drivers and users will need to be aware of the connected environment and how it will impact the ridesharing experience. In his session at @ThingsExpo, Timothy Evavold, Executive Director Automotive at Covisint, will discuss key challenges and solutions to powering a ride sharing and/or multimodal model in the a...
Sep. 29, 2016 02:30 AM EDT Reads: 532
Is your aging software platform suffering from technical debt while the market changes and demands new solutions at a faster clip? It’s a bold move, but you might consider walking away from your core platform and starting fresh. ReadyTalk did exactly that. In his General Session at 19th Cloud Expo, Michael Chambliss, Head of Engineering at ReadyTalk, will discuss why and how ReadyTalk diverted from healthy revenue and over a decade of audio conferencing product development to start an innovati...
Sep. 29, 2016 02:30 AM EDT Reads: 2,178
WebRTC adoption has generated a wave of creative uses of communications and collaboration through websites, sales apps, customer care and business applications. As WebRTC has become more mainstream it has evolved to use cases beyond the original peer-to-peer case, which has led to a repeating requirement for interoperability with existing infrastructures. In his session at @ThingsExpo, Graham Holt, Executive Vice President of Daitan Group, will cover implementation examples that have enabled ea...
Sep. 29, 2016 02:00 AM EDT Reads: 1,594
SYS-CON Events announced today that Numerex Corp, a leading provider of managed enterprise solutions enabling the Internet of Things (IoT), will exhibit at the 19th International Cloud Expo | @ThingsExpo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA. Numerex Corp. (NASDAQ:NMRX) is a leading provider of managed enterprise solutions enabling the Internet of Things (IoT). The Company's solutions produce new revenue streams or create operating...
Sep. 29, 2016 01:45 AM EDT Reads: 2,058
Fifty billion connected devices and still no winning protocols standards. HTTP, WebSockets, MQTT, and CoAP seem to be leading in the IoT protocol race at the moment but many more protocols are getting introduced on a regular basis. Each protocol has its pros and cons depending on the nature of the communications. Does there really need to be only one protocol to rule them all? Of course not. In his session at @ThingsExpo, Chris Matthieu, co-founder and CTO of Octoblu, walk you through how Oct...
Sep. 29, 2016 01:00 AM EDT Reads: 2,311
"My role is working with customers, helping them go through this digital transformation. I spend a lot of time talking to banks, big industries, manufacturers working through how they are integrating and transforming their IT platforms and moving them forward," explained William Morrish, General Manager Product Sales at Interoute, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York City, NY.
Sep. 28, 2016 11:30 PM EDT Reads: 3,976
According to Forrester Research, every business will become either a digital predator or digital prey by 2020. To avoid demise, organizations must rapidly create new sources of value in their end-to-end customer experiences. True digital predators also must break down information and process silos and extend digital transformation initiatives to empower employees with the digital resources needed to win, serve, and retain customers.
Sep. 28, 2016 09:15 PM EDT Reads: 381
In this strange new world where more and more power is drawn from business technology, companies are effectively straddling two paths on the road to innovation and transformation into digital enterprises. The first path is the heritage trail – with “legacy” technology forming the background. Here, extant technologies are transformed by core IT teams to provide more API-driven approaches. Legacy systems can restrict companies that are transitioning into digital enterprises. To truly become a lea...
Sep. 28, 2016 08:15 PM EDT Reads: 374 | s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661781.17/warc/CC-MAIN-20160924173741-00256-ip-10-143-35-109.ec2.internal.warc.gz | CC-MAIN-2016-40 | 16,403 | 71 |
https://www.arrse.co.uk/community/threads/hmmmmmm.119187/ | code | This is a stand-to for an incoming competition, one of our most expensive yet.
Later this week we're going to be offering the opportunity to Win £270 Rab Neutrino Pro military down jacket
Visit the thread at that link above and Watch it to be notified as soon as the competition goes live
I think downgraded personnel should be promotable but only in echelon roles. What's the use of having a Company Sergeant Major if he can't do his job because he has a hand missing or is deaf as a post? It is wrong to bar them from promotion altogether but it is also wrong to put them in jobs they can't carry out to a high standard. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00445.warc.gz | CC-MAIN-2018-43 | 623 | 4 |
https://dev.cocosbcx.io/docs/technical-features | code | Whether game algorithms on blockchains have practical value is related to on-chain randomness. Research reveals that one key problem should be solved for complete on-chain randomness. The algorithms of on-chain randomness are described by smart contracts and the contract process is public. If random results that cannot be figured out by third parties are to be generated, there should be noises from nodes in the input into the process during contract execution. However, the noises from different nodes may vary. That is, other nodes cannot execute the contract again to validate whether the randomness is correct, leading to failure in consensus.
We propose three feasible solutions to address this problem.
Solution Ⅰ: One or multiple random data pools are reserved in the dynamic data zone of blockchain. The block producers wrap the random results in the encrypted data segments of the block and release the closed-source codes of encryption. In this case, all the nodes shall have the same set of random data pools whose data is in pipe form with read and write sides and which are accessible to the read and write sides in line with the algorithms and first-in first-out.
Since transaction processing is consistent across all nodes in the blockchain, applications may read the random results from the random data pools during application. Under such a mechanism for generation and distribution, the security of process and result meets the requirement from the blockchain network:
• Since transaction processing is consistent across all nodes in the blockchain, applications may read the result of randomness from the random data pools during application. Under such a mechanism for generation and distribution, the security of process and results meets the requirement from the blockchain network.
• Any access (read and write) shall cause irreversible changes to the random data pools. l
• Writing random data shall be completed by closed-source dynamic encryption library.
Random data generators shall not know where the random results are placed in the random data pools and who shall use the random process.
This solution is applicable to scenes where the chain network shows consistency in the order of transaction processing. For instance, in RPG games, players open the map treasure chest for random items.
A delegation mechanism allows some transactions to be delegated to some trustable node cluster for processing. The cluster allocates randomly online trustable nodes to execute the transactions. After execution, the trustable nodes shall record the random results. The client shall obtain the results via the informing or polling mechanism.
Since this solution is based on the chain transaction delegation mechanism, the changes to chain shall be smaller than those in solution. However, to ensure the feasibility, the following requirements shall be met:
• The trustee shall pass trusted execution environment validation to ensure their reliability.
• When the trustee execute the randomness and release the results, the same safe encryption library shall be used. For the transfer of encrypted data, zero-knowledge proof or other schemes shall be used to prove the identity of the delegates and shall be recognizable to the clients, so as to ensure that the data obtained by the clients are not forged by third parties.
This solution is applicable to transactions that engage multiple parties but need only one batch of results of randomness, like the order of shuffle in chess and card games.
The current block producers receive random transactions, generate a result via the random function, encrypt the randomness and results, write them to the block data, pack it and send it to the whole network. Other ordinary nodes accept and use the result, reaching a consensus on the random transaction. This solution is applicable to lottery draws in games, like throwing the dice for a random result.
The light nodes make it possible for whole games as contracts on chain. The local game contracts can be executed in the light nodes continually over time, independent of the block period and block size. The execution is only related to the sub-contracts that include consensus in the game contracts.
Based on the syntax identifiers of priority of consensus, the game contracts and sub-contracts adopt asynchronous and synchronous consensus respectively to validate and synchronize key steps during continual execution. In this way, a mechanism is built for continual execution of game contracts and witness of consensus on results.
here is an on-chain interface for session. It creates a user session list with active restriction in the public data zone. The users in the session interval have the power to push transactions to other uses in the same interval. When other users receive the notice on data changes, they shall obtain corresponding data in time.
For traditional blockchain, transaction execution is validated by writing the results to block data when the nodes receive block data, interpret and run the transaction, and obtain the validation of results. When a transaction is submitted, it actually joins the pending queue. It will not be executed until the next block period. Such a mechanism prevents transactions from being responded to and processed in time.
Cocos-BCX comes with a syntax priority identifier for transactions. When a transaction is identified as immediate validation, the nodes will submit, process and broadcast the transaction immediately. The block generation and transaction execution become asynchronous for rapid asynchronous validation. Immediate response is another consensus priority. With it, response to transaction is hardly delayed. The nodes will submit transactions immediately, which greatly increase the response speed.
To improve the node utilization rate and processing efficiency, Cocos-BCX proposes the design of compartmentalized witness based on delegate witness. That is, some nodes focus on processing specific types of contract requests. The design works as follows.
Compartmentalized Witness Mechanism
In the game industry, compartmentalized witness allows optimization of the processing capability of related nodes based on the type of request. For instance, for centralized requests for floating-point operation, the core hashing power should be strengthened; for centralized requests for data structure processing, the storage IO capability should be enhanced. In this way, the overall efficiency and benefit are optimized.
Low transaction latency
Cocos-BCX allows all the transactions in need of consensus to be processed on chain, including those identified as rapid response and immediate validation. The response to transactions on traditional blockchain depends on block production while the speed of validation is confined by the block period. These fail to meet the requirement from game contracts on immediate validation.
Rapid asynchronous validation
Smart contract supports syntax consensus identifier. When the contract is executed, the transactions that are identified as immediate validation shall be elicited and broadcast immediately. Any nodes that receive the broadcast information shall immediately run the process to get the result and broadcast it. Meanwhile, the block producers in the period shall store the result into the result pool. When the quantity of the same result reaches the threshold for pass, the block producers shall broadcast the result of validation and write the transaction to block cache. The whole process is as follows.
Data processing under asynchronous consensus
Under such a design, the nodes will submit, process and broadcast the transaction immediately. The block generation and transaction execution become asynchronous for rapid asynchronous validation.
In the Cocos-BCX, for a contract identified as immediate response, when users send a transaction request to the nodes, the nodes shall immediately broadcast it to the network and return the hash value to the users. With the design, the final record period does not differ too much from the traditional design but the response to transaction is hardly delayed. The nodes shall submit transactions immediately, which greatly increase the speed of response. Moreover, the hash value shall inform the users of the state of transaction. Meanwhile, the information about transaction shall be updated to the transaction history datasheet and pushed dynamically to the users, who need not wait for the transaction to be validated and used in order to receive the callback for response. With reference to hash tracking in Ethereum, we have added the mechanism for dynamic push of transaction to users.
Cocos-BCX proposes the design of syntax support to compartmentalized consensus processing for contracts. The consensus priority of script may be adorned with specific key words so that the contract interpreter can recognize the identified contracts in need of consensus and, when scanning the whole text, compartmentalize them into sub-contracts for consensus from related nodes in the chain network. The consensus priority includes, in an ascending order, no need for consensus, normal consensus, immediate response and immediate validation.
Light nodes and Principles of contract compartmentalization
Whole contract is executed locally. To execute the parts in need of consensus, consensus methods are determined based on the syntax priority identifiers. Different priority calls for different steps of consensus. The execution of game contracts is smoother, with lower possibility of block waiting and, if any, shorter waiting. The execution and consensus of main contracts identified as immediate validation, the highest priority, are asynchronous. For those identified as immediate response, when the transaction is submitted, the nodes shall immediately return a receipt for submission, that is, the hash value (Tx ID). Transactions identified as normal consensus shall be executed following the normal procedure in blockchain. Those identified as no need of consensus shall be executed on light nodes.
In addition, contracts in need of consensus are compartmentalized into sub-contracts that are then distributed for execution. These sub-contracts shall have complete context and a design without external dependence, so that other nodes can obtain the result correctly.
Delegate-type transactions mainly cover the transactions that are highly random and generate different results when executed by different nodes (like generating a set of random numbers). However, this type is restricted to the transaction request related to non-personal data. With the consensus identifiers, names of node clusters (node groups) that engage delegates in consensus can be defined, specifying which type of nodes shall process the transaction. When only one node is needed (N=1), the designated node cluster shall randomly choose one online node within the cluster to process the transaction, for example, processing the random event. When there is more than one delegated node (N>1) or one cluster, several nodes within the designated cluster shall be distributed to process the transaction. When the delegates that pass trusted execution environment validation received the delegated transaction, they shall validate the feasibility of the transaction, execute delegation, and, upon completion, encrypt the transaction result and broadcast it to the chain.
The design of defining the name of the delegated node cluster is adopted for two reasons. First, to ensure security, only the name of the delegated node is defined and nodes are chosen randomly from within the cluster to process transactions. The delegates do not know the specific delegated nodes, which prevents cheating. Second, to guarantee run-time reliability, the designated node cluster ensures that the transaction is distributed to the nodes that are online. Under this mechanism, the on-chain randomness becomes possible. Users can delegate trustable nodes to generate a random number and delegate trustable nodes to maintain public data of contract. Moreover, the developers shall set up function callback mechanism within the contract under the mechanism.
Exchange of homogeneous digital assets
The exchange of digital game assets and Ethereum ERC20 digital assets is as follows.
Exchange relationship between ERC20 digital assets
Game currency supports transfer of assets via mapping gateway to other consortium chains and independent chains.
Exchange of Non-Homogeneous Digital Assets
Exchange relationship between Cocos-BCX and ERC721 digital assets
BCX-NHAS-1808 is the standard for non-homogeneous digital assets applicable to decentralized distributed ledger network in the use of COCOS tokens. Compatible with other standards for non-homogeneous assets, it allows separation between assets and contracts and features expandable and definable data zones.
RC875 and ERC721 standards are protocols for non-homogeneous digital assets in Ethereum. To some extent, ERC875 standard is more like an upgraded version of a simplified ERC721 standard, which is the very first of its kind. ERC841 and ERC821 standards are its optimized and amended versions while ERC875 standard is simpler and more direct. Its defined functions include name, symbol, balanceOf, transfer, transferFrom, totalSupply, ownerOf and trade. Compared with ERC721 standard, ERC875 standard has simpler functions.
Further expanding the digital asset technologies supported by exchange gateway allows the gateway to support non-homogeneous complex contracts represented by ERC721 and ERC875 standards. The gateway is equivalent to a specific compiler in the exchange of game items and non-homogeneous contracts. By translating and exchanging structured data, it realizes two-way exchange between non-homogeneous contracts and on-chain game items. Compatible with more types of prop exchanges inside and outside the chains, it provides richer game content and user experience.
Prop producing for blockchain games is atomic. The prop producers create items based on the demands, materials and assets submitted by the players. Upon completion, the items are transferred to the players. This process engages a series of operations (OP), including generation of digital assets, setting of prop attributes, and transfer of asset ownership to users. To ensure the consistency in the operational results, we combine the operations into one transaction, that is, one atomic operation. In this case, all the operations inside the transaction will succeed or fail at the same time.
Atomic merges of multi-operation
Another application of atomicity is dis-intermediated assets exchange of Project BCX, aimed to help seller gain more and buyers consume less. The dis-intermediated circulation platform does not store data about users’ assets but act as a medium for marrying node-to-node requests. Game manufacturers can flexibly design their own data structures for game assets. The exchangeable content goes beyond homogeneous assets within games to cover items, equipment, game data and other non-homogeneous assets. When users make a request for sale on the circulation platform, the related game assets (currency or items) shall be locked and cannot be used until the request is cancelled. The request contains the mainchain ID of the seller and the content of the assets to be sold. When the request is fulfilled, the system shall automatically change the assets ownership and transfer to the seller the assets that the buyer has paid. By then, the whole circulation request is completed.
When asset exchange begins, the sale or purchase shall be submitted to the circulation platform in the form of a request. The transfer of assets and the change of ownership shall be deemed as a one-off indivisible operation. In other words, the behaviors of both sides shall be recognized by consensus. If one party’s change in assets is not recognized by the mainchain block, the whole transaction shall be rolled back. That is, the change of asset ownership or transfer of assets throughout the exchange shall be packed inside one transaction. The state of the two acts shall be consistent. When the transaction is completed normally, a unique transaction ID shall be available for checking on chain.
Determining state of atomic transaction
As the core node of transaction processing and communication across the network, BP has quicker access to the results than general nodes and thus higher priority in acquiring some timely or confidential information. This also indicates the possibility of cheating. For instance, the BP may be informed of the outcome of a random process in advance and utilize contract to predict the operation outcome, which is unfair to the games in the public chain and makes them unsafe.
The developers herein refer to individuals and organizations that are capable of chain interaction or transformation, including low-level and contract developers in the chain network. They are able to analyze or control chain information in depth. There is some reason to believe that they can read codes to access the encrypted or concealed details of communication technologies that may be used in transmission and design codes accordingly to obtain the information of random process and sensitive data illegally.
Therefore, BCX comes with mechanisms for transaction execution, information transmission and operation that can prevent BP and developers from cheating, which will be introduced in the following text.
Transmission with dynamic encryption
To prevent sensitive information from being intercepted and deciphered during broadcasting, a safe mode of transmission with dynamic encryption is integrated into the BCX chain network, as demonstrated by the following figure.
Transmission mechanism with dynamic encryption
Take random seed as an example. When a random seed is produced, the producer will encrypt the information with the dynamic data concerning time, block height and other noise input as AES key and broadcast the encrypted information. All the nodes share the same algorithm for dynamic key generation that allows them to correctly decipher the information. The eavesdropping third parties, however, will not be able to do that, which guarantees the security of sensitive data during transmission.
Prevention of connection from custom nodes
Sound transmission security alone cannot prevent node developers from revising programs to output the information they have received and deciphered. Hence, a mechanism for authentication is incorporated to prevent the revised node program from connecting to the chain network, as shown in the following figure.
Node access authentication mechanism
Before release, the BCX node program will be embedded with authentication information that is not included in the open source code yet related to the version number. When a node tries to connect to the chain network and other nodes, the latter will verify whether the information is consistent with that recorded in the chain network and refuse the connection of nodes that fail to pass authentication. In this way, malicious connection from revised node program to the chain network is stopped. Besides, secondary developers can also customize the authentication information in the source code to release their own chain network, which will also be able to prevent non-official code connection.
Since the contract itself is a Turing-complete state machine system, fixed input will derive fixed output and the results will be broadcast to the whole network for synchronization under the transaction mechanism. If the broadcast information is an intermediate process of a series of behaviors, some process variables that should not have been known may be revealed. To address this problem, a contract execution logic that conceals process variables is put forward, as shown in the following figure.
Process variable concealing mechanism
With a reasonable contract design, process variables that involve sensitive data are executed in the same OP, which is completed in the memory of the executing node. What is broadcast is then the OP outcome. In this way, the process variables are concealed throughout the period, with no risk of exposure.
An execution authentication mechanism is added to prevent malicious developers from predicting the possible output of the contract via test use of contract interface. To be specific, only authorized accounts can execute specific contract interface, as shown in the following figure.
Contract mechanism with execution authentication
When the contract receives execution request, the signature in the request shall be verified against the execution permission specified in the contract. Only when the request passes authentication shall the contract function be executed normally. Otherwise, the contract shall be directly terminated without returning the payment to the requestor. With this mechanism, even if the developers know the code of the chain and the contract and the process and principles of execution, they shall not be able to maliciously use specific interfaces. This guarantees that the contact shall not be used at will for prediction of execution outcome.
For information or operation that remains sensitive during a certain period of time (like a board game), this scheme allows execution logic within the period to work in a blackbox mode with protection from the Trusted Execution Environment (TEE) mechanism. Usually the chain network challenges or verifies the blackbox environments on a regular basis via the TEE mechanism to ensure their reliability. Besides, any of them shall be chosen randomly for the execution of the sensitive processes. When the execution is completed, traceable records and outcome shall be submitted online to ensure fair, open and transparent recording on the chain network. It works as follows.
Mechanism for internal execution of sensitive processes
This mechanism prevents developers and BP from participating in the execution of the blackbox processes, thus avoiding their cheating. With the design specified above, COCOS-BCX shall offer a trusted, reliable and safe execution environment for all the businesses that it supports. | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578716619.97/warc/CC-MAIN-20190425094105-20190425120105-00209.warc.gz | CC-MAIN-2019-18 | 22,272 | 70 |
https://www.dice.com/jobs/detail/Lead-Hadoop-Systems-Analyst-Judge-Group%2C-Inc.-Austin-TX-73301/cxjudgpa/888281 | code | Austin, TX Description:
The Judge Group has partnered with theleading global payment processing company who is looking for a Lead Hadoop Systems Analyst! ***Please email for faster consideration***JOB DESCRIPTIONEssential Functions:
• Work closely with Development, Operations and all other cross functional teams across product and technology to drive the DevOps team.
• Design, Develop and maintain data-driven ETL applications and services using Big Data Technologies.
• Lead DevOps engineers and build a culture of engineering excellence.
• Focus on improving the quality, scalability, availability, reliability, resilience of the application services.
• Identify the areas opportunities and propose new solutions which can help in production incidents.
• Focus on the low latency and performance optimization to meet the client SLAs.
• Focus on observability, monitoring to improve the operational and client experience.
• Focus on data security, privacy and protection.
• Provide technical leadership to DevOps teams by participating in design, architecture and process reviews.
• Partner with Leadership in developing the strategy and roadmap that align with business goals.
• Manage the prioritization and delivery of strategic initiatives for enhancements on the platform.
• Maintain ETL data pipelines, enhance, optimize and automate using open source bigdata technologies.
• Perform code reviews and help the team to produce quality code.
• Work and Partner with geographically distributed team.
• Stay informed on technology trends and product roadmaps to make informed solution architecture recommendations.• This is a hybrid position. Hybrid employees can alternate time between both remote and office. Employees in hybrid roles are expected to work from the office two days a week, Tuesdays and Wednesdays with a general guidepost of being in the office 50% of the time based on business needs.QUALIFICATIONS
• 10-15years of relevant work experience with a Bachelor's Degree, a Master's Degree; or a minimum of 5 years of relevant work experience a PhD.Preferred Qualifications:
• Bachelor degree in a technical field such as computer science, computer engineering or related field. Advanced degree preferred
• Minimum of 10+ years of software development experience (with a concentration in data centric initiatives), with demonstrated expertise in leveraging development best practice methodologies.
• Minimum of 5+ years of experience in building large-scale applications using open source technologies.
• Design and coding skills with Big Data technologies like Hadoop, Spark, Kafka, Scala, Java, Hive, Data APIs, streaming.
• Relevant experience with DevOps, software design, Data engineering, architecture and ETL data pipeline development life cycle.
• Hands-on expertise with Spark/Scala or Spark/Java or Spark/Python.
• Experience with highly distributed, scalable, concurrent and low latency systems.
• Strong Team player and ability to collaborate as part of virtual team across the globe.
• Deep knowledge of Unix/Linux, Shell Scripting and writing SQL queries.
• Deep knowledge of all Data Warehouse related components (Sourcing, ETL, Data Modeling, Infrastructure, Reporting, Data Visualization) and multiple tools to support those components.
• Experience with Agile methodologies. Work experience in improving the client experience.
• Excellent analytical and problem solving skills with a strong automation and customer mindset
• Must have strong interpersonal and communication skills (written and verbal). Should be comfortable facilitating multi-team activities.
• Payment processing background is a big plus.ADDITIONAL INFORMATION
We have adopted a COVID-19 vaccination policy to safeguard the health and well-being of our employees and visitors. As a condition of employment, all employees based in the U.S. are required to be fully vaccinated for COVID-19, unless a reasonable accommodation is approved or as otherwise required by law. Work Hours: Varies upon the needs of the department.Travel Requirements: This position requires travel 5-10% of the time.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, frequently operate standard office equipment, such as telephones and computers.Contact:
This job and many more are available through The Judge Group. Find us on the web at www.judge.com | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00358.warc.gz | CC-MAIN-2022-40 | 4,535 | 35 |
http://www.cs.technion.ac.il/users/yechiel/c++-faq/exceptions-in-dtor.html | code | (Click here for a personal note from Marshall Cline.)
/ FAQ 11.13
What's the deal with destructors?
What's the order that local objects are destructed?
What's the order that objects in an array are destructed?
Can I overload the destructor for my class?
Should I explicitly call a destructor on a local variable?
Can I call a destructor on a local if I
How do I handle the situation from the previous FAQ?
What if I can't wrap the local in an artificial block?
Explicitly calling dtor for objects allocated with
What is "placement
" and why would I use it?
Should a dtor call the dtors for member objects?
Should a dtor call the dtors for the base class subobject?
Should my destructor throw an exception when it detects a problem?
to allocate memory from a specific memory area?
E-mail the author
About the author
Revised Jul 4, 2012
[11.13] Should my destructor throw an exception when it detects a problem? | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249530087.75/warc/CC-MAIN-20190223183059-20190223205059-00101.warc.gz | CC-MAIN-2019-09 | 909 | 21 |
https://technicalnick.in/tech/top-10-web-designing-languages/ | code | You do not need anyone to explain the importance of websites in today’s world. If you have any kind of business to run and planning for the long term, then you must take care of your digital appearance. People, who do not know anything about languages used while web designing, can face many issues while working with the best web design company to bring their website to higher rankings on search engine result pages.
Because the designers or developers from web design agencies can use keywords like HTML, Java, CSS, etc. Web designers use these phrases to create appealing web designs. Most business owners do not know anything about web design, but they should understand at least the basics of web designing languages used in web design.
Need for Basic Understanding
As you know a website is a crucial component of your business and in all the strategies, so it is important to understand the needs for your website. It is not necessary to know how coding can be done; instead, you can just express your needs to any of the best web design company.
With changes in technology, it helps to understand why designers need to update their site as per current trends or Google updates in web designing.
Top 10 Web Designing Languages
There are varieties of web designing programming languages used by designers or developers. Their preferences are depending on their experience with using different website design languages. Without wasting any time, here is the top ten web designing languages list you will find out there.
A Hypertext Markup Language (HTML) is the fundamental language in web design. This language is dynamic and helps you to create web pages with less code. The first language must know by a web designer is HTML coding for designing a web page. HTML helps to create the layout and structure of your website. With updates in technologies, HTML has become aged and can be used in assisting and supporting other web designing languages. Although it is old and outdated, it is still recognized or used by many coding communities to create beautiful and fabulous websites.
HTML comes into the picture only with Cascading Style Sheets (CSS). This is the backbone of web design. If you have HTML then you must have CSS. These two languages come hand in hand, side by side. HTML gives structure to a website and CSS gives style to your website. CSS takes care of all the minute details like colors, backgrounds, shading, text styling, formatting, and more. This is one of the critical website coding languages as it is not kept with the main code but later imported to the main code.
Java is one of the most used popular server-side programming languages. Designers and developers use this language for large-scale sites that undergoes volume moment from one instance to another. It is mostly used to develop site content, games, apps, and software. Over 15 billion users have Java installed on their devices in some form or other. This language is portable and can run on any platform, so it is mostly preferred to design a website.
Python is one of the versatile, essential, and easiest languages to use and work with for web designing. It is basically used to create frameworks for any website. It is most preferred due to its coding and syntax that helps designers or developers work with and explain to their users. Python language is designed for web servers similarly to Java. The standard library of Python keeps actual code short and simple that can be downloaded into a server and from there it can be imported into the system if required. Pinterest and Instagram are some famous Python websites.
Structured Query Language (SQL) is a database query language used by various businesses to store data in a structured manner. Data management, storage, and accumulation are the major tasks are for any business. SQL is used to manage, store and re-access the data when your website is computing large amounts of records. The data is stored on a cloud server that can be accessed from anywhere in the world. It is used in pairs with other languages to get most of your customer database and website.
PHP is one of the best languages used while creating web applications. It is a dynamic, server-side scripting language that runs on web servers and helps to create fully functional web applications. This language is preferred due to its ease of understanding and reliability. This is an open-force language that is modified to meet the needs of any website or business. Previously, it was referred to as Personalized Home Page, but now it is changed to Hypertext Preprocessor. Large websites like WordPress and Facebook use PHP language to manage, store and process their data.
Ruby is another programming language that is come into effect for web designing. Like PHP and Python, Ruby is also easy to understand and learn for beginners. It focuses more on compact coding that is reliable and adaptable. Ruby on Rails framework is an open-source web system that helps designers to create dynamic websites quickly with great effect. Ruby follows multiple approaches to perform the same function which is opposite to that of Python. Shopify is a good example of using this language.
Angular is an open-source Type-script based language. It is funded by Google and used in application development. Angular allows designers and developers to build applications that live on the web, mobile, or desktop. It is rewritten by the same team that built AngularJS. It claims a large collection of resources in the cloud and a very responsive community.
Dot Net (.NET) is a programming language that has numerous applications in the programming industry. It is a framework that can be used to design and develop a wide range of applications. There are no bounds when it comes to programming for different platforms. It supports many other languages like C#, VB.NET, C++, and F# to make websites more fluid and flexible on all devices. .NET is used to create the simplest programs to the most complex ones with a collection of predefined libraries.
In the End
As you know, programming and coding are changing rapidly in today’s world. The coders and programmers are experts in creating miracles on digital platforms. It should be noted that the old languages are still regarded in the industry due to their results and can be improved as per the needs. This ability to update has kept the languages still in the scenario. In all, regardless of the relevance of the language, as a programmer one has to learn web designing languages to develop the basic sense of coding to create more appealing websites for user engagement.
I am Pradnya Ahire working with AS Webworks as a Digital Marketing Intern. I graduated in Electronics Engineering. I was looking for a Digital Marketing Internship to have strong hands-on practical knowledge. I found AS Webworks on Indeed.com and got a way to restart my career in Digital Marketing. AS Webworks allows me to work on live projects to promote various businesses online.
To Read More Tech Blogs Visit: Technical Nick
Leave a Reply | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949107.48/warc/CC-MAIN-20230330070451-20230330100451-00575.warc.gz | CC-MAIN-2023-14 | 7,076 | 21 |
https://za.pinterest.com/pin/169588742190010200/ | code | A tiger mother lost her cubs from premature labor. Shortly after she became depressed, her health declined. So they wrapped up piglets in tiger cloth & gave them to the tiger. The tiger now loves these pigs & treats them like her babies. This is so sweet.
This Island In Japan Is Amazing. If You Love Cats, You Have To See This. | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806422.29/warc/CC-MAIN-20171121185236-20171121205236-00709.warc.gz | CC-MAIN-2017-47 | 328 | 2 |
https://www.sencha.com/forum/showthread.php?284197-Simple-Organization-Chart | code | Simple Organization Chart
Second my english isn't the best, so sorry my mistakes.
Back to business.
I have developed a component to render an organization chart by reading Ext sources, reusing many code as i could and overriding what I need
Here is some features:
I have teste it on IE8, IE10, Chrome 33, FireFox 28 with Ext 4.1.1 and 4.2.1
- Simple Ext.data.TreeStore backend
- Multiple selection
- Drag and drop with auto scroll and auto expansion
- Custom rendering by overriding renderItem method or itemTpl template
- Keyboard navigation (left, right, up, down) with auto expand, +/- expand/collapse
Any comment or bug report is welcome.
Looks cool. Please do post your code or an easy link. Thanks for sharing your work with the community.
Join me at SenchaCon 2016!
I´ve uploaded the code, so be my guest!
I´ve made some improvements and some more examples.
Have you made an Ext 5 version of this? | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00242-ip-10-185-217-139.ec2.internal.warc.gz | CC-MAIN-2016-22 | 906 | 17 |
http://botchat.website/baltimore/cheap-chatbot-baltimore-learn-more.html | code | Interestingly, the as-yet unnamed conversational agent is currently an open-source project, meaning that anyone can contribute to the development of the bot’s codebase. The project is still in its earlier stages, but has great potential to help scientists, researchers, and care teams better understand how Alzheimer’s disease affects the brain. A Russian version of the bot is already available, and an English version is expected at some point this year.
The “web-based” solution, which runs on a remote server, is generally able to be reached by the general public through a web page. It constitutes a web page with a chatbot embedded in it, and a text form is the sole interface between the user (you) and the chatbot. Any “upgrades” or improvements to the interface are solely the option and responsibility of the botmaster.
There are several defined conversational branches that the bots can take depending on what the user enters, but the primary goal of the app is to sell comic books and movie tickets. As a result, the conversations users can have with Star-Lord might feel a little forced. One aspect of the experience the app gets right, however, is the fact that the conversations users can have with the bot are interspersed with gorgeous, full-color artwork from Marvel’s comics.
Chatbots could be used as weapons on the social networks such as Twitter or Facebook. An entity or individuals could design create a countless number of chatbots to harass people. They could even try to track how successful their harassment is by using machine-learning-based methods to sharpen their strategies and counteract harassment detection tools.
Online chatbots save time and efforts by automating customer support. Gartner forecasts that by 2020, over 85% of customer interactions will be handled without a human. However, the opportunites provided by chatbot systems go far beyond giving responses to customers’ inquiries. They are also used for other business tasks, like collecting information about users, helping to organize meetings and reducing overhead costs. There is no wonder that size of the chatbot market is growing exponentially.
The first formal instantiation of a Turing Test for machine intelligence is a Loebner Prize and has been organized since 1991. In a typical setup, there are three areas: the computer area with typically 3-5 computers, each running a stand-alone version (i.e. not connected with the internet) of the participating chatbot, an area for the human judges, typically four persons, and another area for the ‘confederates’, typically 3-5 voluntary humans, dependent on the number of chatbot participants. The human judges, working on their own terminal separated from one another, engage in a conversation with a human or a computer through the terminal, not knowing whether they are connected to a computer or a human. Then, they simply start to interact. The organizing committee requires that conversations are restricted to a single topic. The task for the human judges is to recognize chatbot responses and distinguish them from conversations with humans. If the judges cannot reliably distinguish the chatbot from the human, the chatbot is said to have passed the test.
AIML, Artificial Intelligence Markup Language developed by Richard Wallace, constitutes an open standard for creating your own chat bot. AIML file consists of row-type, database-style data combined with hierarchical XML data in each response. This video shows one of spreadsheet-style editors for AIML, Simple AIML Editor (SAE) developed by Adeena Mignogna. The SAE allows botmasters to manage large AIML sets and then zoom in on the templates to edit the responses.
This chatbot aims to make medical diagnoses faster, easier, and more transparent for both patients and physicians – think of it like an intelligent version of WebMD that you can talk to. MedWhat is powered by a sophisticated machine learning system that offers increasingly accurate responses to user questions based on behaviors that it “learns” by interacting with human beings.
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00093.warc.gz | CC-MAIN-2019-43 | 4,940 | 9 |
https://connect.mozilla.org/t5/ideas/thunderbird-feature-request-remove-spaces-menu-icon-from-tab-bar/idc-p/22898 | code | Thank you from providing the ability to submit ideas. I'd like the see the ability to remove the Spaces menu icon from the tab bar by using the "Customize feature. I never use the Spaces feature and I'd like to remove the icon.
I understand this likely to be a low priority, but thank you for reading.
Thanks for submitting an idea to the Mozilla Connect community! Your idea is now open to votes (aka kudos) and comments.
While I agree with you, I think the next version of Thunderbird might so radically change the menu and toolbars that we are essentially forced to use the spaces bar.
There are design mockups here https://mzla-thunderbird.invisionapp.com/console/share/4CGY3FJ2WXQ/939370603
There is also a discussion here https://thunderbird.topicbox.com/groups/ux/T289d324313b78e81/thunderbird-114-unified-toolbar
I hope this does not mean that all the non-mailreader stuff will be forced upon users even more. TB is a fine mailreader. All the other stuff is plenty available in other apps and/or not needed at all for most users.
Mail is what I am working with all day. Please allow us to use TB as a mailreader only app! | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00322.warc.gz | CC-MAIN-2023-14 | 1,129 | 8 |
http://www.lispworks.com/documentation/lw70/CAPI-W/html/capi-w-82.htm | code | Here is a simple example of interface definition done with define-interface:
(define-interface demo ()
:text "Page Up")
:text "Page Down")
:text "Open File"))
'(page-up page-down open-file)))
(:default-initargs :title "Demo"))
An instance of this interface can be displayed as follows:
(display (make-instance 'demo))
At the moment the buttons do nothing, but they will eventually do the following:
Figure 10.1 A demonstration of a CAPI interface
Later on, we will specify callbacks for these buttons to provide this functionality.
(:default-initargs :title "Demo") part at the end is necessary to give the interface a title. If no title is given, the default name is "Untitled CAPI Interface".
Note: the define-interface form could be generated by the Interface Builder tool in the LispWorks IDE. See the LispWorks IDE User Guide for details. As the interface becomes more complex, you will find it more convenient to edit the definition by hand.
Examine the define-interface form to see how this interface was built. The first part of this form is shown below:
The interesting part of the define-interface form occurs after these
defclass-like preliminaries, where it lists the elements that define the interface's appearance. Here is the
:panes part of the definition:
:panes list specifies panes that are made when the interface is made. However it does not specify which panes are displayed: that is controlled dynamically by the interface's layout which may contain all, some or none of the panes in the
:panes list. The interface may also display other panes that are made explicitly, though this is less common.
The interface information supplied in this section is a series of specifications for panes and layouts. It could also specify menus and a menu bar. In this case, three buttons are defined. The layout chosen is a row layout, which displays the buttons side by side at the top of the pane.
CAPI User Guide and Reference Manual (Windows version) - 25 Feb 2015 | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578551739.43/warc/CC-MAIN-20190422095521-20190422121521-00553.warc.gz | CC-MAIN-2019-18 | 1,976 | 22 |
https://www.fr.freelancer.com/projects/php/sql-statements-javascrit-extraction-data/ | code | We are building a web that extract data from mysql, we do select with OR and AND contructs
The data must be displayed in the browser as a list
23 freelance font une offre moyenne de €5/heure pour ce travail
Hi, I have done Bachelors in Computer with more than 9 years of C++ development experience. I am OCP certified. Thank you for considering. Regards, Rashmi.
I have two years experience in full stack web development which also includes writing database queries and connecting mysql to the server. this is industrial experience. | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249550830.96/warc/CC-MAIN-20190223203317-20190223225317-00477.warc.gz | CC-MAIN-2019-09 | 534 | 5 |
https://www.oreilly.com/library/view/mcts-microsoft-windows/9780470948453/11_assessmenttext.html | code | - What filename extension is applied by default to custom consoles that are created for the MMC?
- You want to create roaming profiles for users in the Sales department. They frequently log on at computers in a central area. The profiles should be configured as mandatory and roaming profiles. Which users are able to manage mandatory profiles on Windows 7 computers?
- The user who uses the profile
- Server operators
- Power users
- You want to monitor the CPU, memory, and disk usage on your computer to ensure that there are no bottlenecks. Which MMC snap-in would you load to access System Monitor?
- System Monitor
- Reliability Monitor
- ActiveX Control
- Performance Logs and Alerts
- If you wanted to require ...
Get MCTS Microsoft® Windows® 7 Configuration Study Guide, 2nd Edition now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00476.warc.gz | CC-MAIN-2020-05 | 944 | 13 |
https://bmnotes.com/2019/06/05/swift-5-abi-stability-why-matters/ | code | Swift is fast, safe and expressive language to code with great full stack potential and community support. According to Apple, it’s 2.6 times faster than Objective C (As a Swift Developer I agree on this too 🙌). It’s 6th most loved language on StackOverflow.
As a developer, I hope sometimes you also felt that Swift is not mature enough as every year major changes are being introduced. One of the key problems articulated by a lot of developers is “Lack of backward compatibility” and you have to choose one Swift version for your Xcode projects (Version lock). Resulting in with a new Swift release you have to make changes in your existing code.
What is ABI?
ABI stands for Application Binary Interface. During runtime, Swift program interacts with other libraries through ABI (low-level set of protocols for memory representation, function calls, and access).
So when I say ABI not stable it means that each binary is building its own version of Swift Dynamic Library inside it. Now the Swift 5 is ABI Stable which means if I build two apps, one using Swift 5.0 and another using Swift 5.2, both will use Swift ABI embedded in the iOS Operating System.
Why ABI Stability matters a lot?
- Compatibility: which means new compilers can compile your old Swift code which means no more migration pain.
- Bundle Size: It will reduce the build size because of no Swift standard Library in your Framework folder.
- Binary framework and runtime compatibility which enables the framework distribution as a binary which will work across multiple Swift versions.
Conclusion: Swift is the future for Apple ecosystem and now it’s ABI stable which means, in other words, it is like write once and use everywhere. | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00305.warc.gz | CC-MAIN-2022-05 | 1,715 | 10 |
https://github.com/memcached/memcached/wiki/DevelopmentRepos | code | Join GitHub today
GitHub is home to over 28 million developers working together to host and review code, manage projects, and build software together.Sign up
Clone this wiki locally
If you are developing, please CC the mailing list on any pull requests, or simply post the code on the mailing list. There is a much wider audience there.
Memcached can be a daunting project to get into. The codebase was started in 2003, and it is a stable long term project with many users. It is nearly impossible to "clean up" or deprecate odd looking old bits, and new features must have very high performance characteristics.
A big, probably easiest, contribution is to simply run newer versions.
For the last few years, an option
-o modern has existed which flips feature
flags intended to become future defaults. While we try very hard to ensure the
stability of these new features, they are gated while we look for community
This feedback often takes a long time to come in. If you run a cluster of memcached servers and wish to contribute back, keep an eye on new releases and run just a single instance on newer code. If you do this once a quarter, or once a half, or simply ask the mailing list when a good time would be, you can make a huge difference.
For code contributions; reading code, running tests, running mc-crusher, reading commits, and submitting typo fixes are a great place to start. You should already be familiar with C. You should also study libevent-style code and multithreading basics. Keeping up with community and maintainer PR's is also essential for understanding the direction of the codebase.
If you're looking for something to do, please ask the mailing list and give some hint as to your ability or what kind of work you are looking for. There are plenty of helpful tasks that have a lower barrier of entry.
master tree in the central repo should always contain runnable, high quality code. We go out of our way to ensure nothing goes to the central repo without a barrage of regression tests and code reviews.
next tree is the integration branch for the next proposed release. As
PR's are tested/reviewed they are committed into this branch. If you are doing
feature work, please fork from and issue PR's against
next if possible.
Releases are tested using a slowly-increasing set of buildbots: http://build.memcached.org:8010/ - due to costs, the buildbots are only running during active development.
A little odd, but we expect that tests have been run before PR's are opened. While this isn't always true for the full buildbot run, no commits should be up for discussion if they do not pass the test suite. This is different in comparison to projects which run CI against PR's.
Performance regression tests are run on hosts graciously donated by https://www.packet.net/ - large many-core, NUMA, 144G+ RAM, 10g networking. Each change is measured carefully to avoid performance and stability problems. Some tests are run for many hours, even!
Performance tests are managed using mc-crusher: https://github.com/dormando/mc-crusher - it's very simple to get the tool running locally for testing, and you can still measure performance changes on smaller hosts. For instance, I do first-pass tests using a dualcore broadwell NUC.
Any active contributor is welcome to share our development infrastructure.
Portability. Our old buildbot network included several solaris hosts and a few BSD types. As of this writing the BSD's are still missing but FreeBSD and OpenBSD will be added back in. Memcached is shipped as part of a standard package on many operating systems and it is important to validate portable builds, even on ARM hosts!
Central Github Repo
Web View: https://github.com/memcached/memcached
git clone git://github.com/memcached/memcached.git
We are actively developing against the 1.4 tree.
The 1.4 tree is the "master" branch.
- dormando: git://github.com/dormando/memcached.git | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596336.96/warc/CC-MAIN-20180723110342-20180723130342-00300.warc.gz | CC-MAIN-2018-30 | 3,911 | 30 |
https://www.contest.sk/en/products/interpretation-technology/interpreter-console-for-two-interpreters-td-310-iso/ | code | Interpreter Console for Two Interpreters TD 310 ISO
The interpreter console is part of the equipment of an interpreter booth. It is meant to be used by two interpreters for a single language, allowing them to swap while working.
All control components are located on the top panel, allowing for comfortable management and good overview of the current settings. They are divided into the transmitting and listening part.
The output channel switch ("output select") can be used for switching the signal from one interpreter to one of the two output channels.
The left and right audio channel (corresponding to the left and right headphone) are completely independent. Each interpreter is able to choose an arbitrary track to listen to without interfering with another interpreter or the output signal from the console.
The input volume is adjustable, as is the sound color using high and low frequency correction. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00661.warc.gz | CC-MAIN-2023-14 | 911 | 6 |
http://tecfa.unige.ch/guides/xml/frame-sgml/sdocbook/ | code | Scroll down please and read the text at the end.
MY "Simplified" DocBook XML DTD for FrameMaker + SGML
Part of my let's struggle with "FrameMaker+SGML 6" and XML series.
IMPORTANT if you are new to the topic you really should read my little
"XML with FrameMaker + SGML Quick Guide" (or better if you can find anything):
This is an unfinished project started on March 19 2001.
Trying to import Norman Walsh's Simplified Docbook XML DTD into FrameMaker + SGML 6.
Importing the sdocbook.dtd almost works. Framemaker doesn't read unicode, therefore all character entity declarations will fail. This concerns:
Since I am no FrameMaker or SGML expert I don't know how to fix this. Maybe read/write
rules could do the trick.
I did not spend more than 10 minutes on the problem, but just ripped out
the lines you can see in the mysdocbook.diff file.
Fixing this is not a #1 priority, since Frame generates something that does
look like correct entities (needs verification, fact that Mozilla shows it is not enough).
- sdbcent.mod and files in the ent/ directory
- You can import mysdocbook.dtd into FrameMaker + SGML. The result will be something like
Don't forget to change "Valid as highest-level element" from title to article !
- mysdocbook-edd.fm has also formatting rules. Sort
of a feasability test if you like.
WARNING: So this EDD isn't really complete. Will do better when/if I decide
to use this DTD for real. Writing EDD rules is really rough work if you do it just
casually. It's not my idea of fun. Anyhow I actually did write
a text (on which I will improve until the end of summer 2001): http://tecfa.unige.ch/tecfa/talks/schneide/tie-talk01/internet-uni01.pdf. My major struggle were figures, can't have them floating and have a DTD conformant title element :(
- An example file is: sdocbook-test.fm. Generated xml is sdocbook-test.xml.
Note that the output (on march 21) is almost valid. Only imagedata has wrong attributes, and another attribute (cols) is missing from tgroup. Both problems could be fixed with xslt post-processing, but maybe I should try to generate correct XML from the start with r/w rules.
The CSS stylesheet has not been adapted for real use yet (and maybe never will, at least before I see some CSS2 capable browsers). You can grasp some contents from Mozilla or maybe even IE explorer.
- XSLT->XHTML stylesheet: Should do this next.
- Note: No plans for XSL/FO since the point of using FrameMaker is precisely that it can do PDF (and PS) natively and quite well so. Look at the *.pdf files for sdocbook-test.
Final note: If you try to include the full XML Docbook DTD, FrameMaker + SGML 6 will crash on both Solaris and Windows 2000 (at least the versions I have). Don't http://docbook.org/xml/ or tell me how :)
Last modified: Sat May 26 23:20:09 MEST 2001 | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523476.97/warc/CC-MAIN-20200607013327-20200607043327-00085.warc.gz | CC-MAIN-2020-24 | 2,793 | 30 |
https://dash.dsv.su.se/2012/04/25/the-healthcare-analytics-and-modeling-group-had-three-papers-accepted-to-lrec-2012-in-istanbul-and-to-the-third-workshop-on-building-and-evaluating-resources-for-biomedical-text-mining-biotxtm-2012/ | code | Three papers to LREC 2012 in Istanbul
The Healthcare analytics and modeling group had three papers accepted to LREC 2012 in Istanbul and to the Third Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2012) held in conjunction with LREC 2012.
The accepted papers are:
– Pseudonymisation of personal names and other PHIs in an annotated clinical Swedish corpus by Alyaa Alfalahi, Sara Brissman and Hercules Dalianis.
– Releasing a Swedish clinical corpus after removing all words – de-identification experiments with conditional random fields and random forests by Hercules Dalianis and Henrik Boström.
– Rule-based Entity Recognition and Coverage of SNOMED CT in Swedish Clinical Text by Maria Skeppstedt, Mia Kvist and Hercules Dalianis.
Maria Skeppstedt, Hercules Dalianis and me are going to Istanbul to present the papers.
About Alyaa Alfalahi
Hi, My name is Alyaa Alfalahi. I have studied the masters degree (civ.ing) in computer science at KTH. In the third and fourth year of my education, I have studied information and database technology at DSV. In my master thesis
I worked with the annotated patient record, Stockholm EPR Corpus, and I created an algorithm to replace the real name with another real name , which is called pseudonymisation. Now I have been working as project engineer for a project (High-Performance Data Mining for Drug Effect Detection) of the Healtcare analytics and modeling research group. My task is to build up a journal database system from the raw data from Karolinska that is contained in Take Care health record system that will be used in this project. I also have been working as course assistant on web mining course. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657169.98/warc/CC-MAIN-20230610095459-20230610125459-00614.warc.gz | CC-MAIN-2023-23 | 1,700 | 10 |
https://rockfishfoundation.org/bluehost-corporate-pricing/ | code | Bluehost Corporate Pricing
Discovering a premium affordable host service provider isn’t simple. Every site will certainly have various requirements from a host. Plus, you have to contrast all the functions of a holding company, all while looking for the best offer possible.
This can be a lot to sort via, especially if this is your first time acquiring hosting, or developing a web site.
The majority of hosts will certainly use extremely affordable initial prices, only to increase those rates 2 or 3 times greater once your preliminary contact is up. Some hosts will supply totally free bonus offers when you join, such as a free domain name, or a free SSL certificate.
While some hosts will certainly be able to use far better efficiency as well as high degrees of security. Bluehost Corporate Pricing
Listed below we dive deep into the very best affordable host plan there. You’ll learn what core organizing functions are crucial in a host and just how to assess your own hosting demands so that you can select from one of the most effective cheap holding service providers listed below.
Disclosure: When you purchase a webhosting plan via web links on this page, we make some compensation. This aids us to keep this site running. There are no added costs to you in all by using our links. The listed here is of the best low-cost host bundles that I’ve personally used as well as examined.
What We Think about To Be Inexpensive Web Hosting
When we describe a web hosting bundle as being “Affordable” or “Budget plan” what we suggest is hosting that comes under the cost bracket in between $0.80 to $4 each month. Whilst looking into low-cost organizing service providers for this guide, we checked out over 100 different hosts that fell into that rate array. We then analyzed the quality of their cheapest hosting bundle, worth for money and also client service.
In this article, I’ll be reviewing this world-class internet site holding firm and also stick in as much pertinent information as feasible.
I’ll look at the attributes, the pricing alternatives, and also anything else I can consider that I assume could be of benefit, if you’re determining to register to Bluhost and also get your internet sites up and running.
So without further ado, allow’s check it out.
Bluehost is one of the largest webhosting companies worldwide, obtaining both massive marketing support from the firm itself and affiliate online marketers who advertise it.
It truly is a huge business, that has actually been around for a long time, has a big reputation, and is most definitely one of the top choices when it concerns webhosting (absolutely within the leading 3, at least in my book).
However what is it exactly, and also should you obtain its solutions?
Today, I will certainly answer all there is you need to recognize, given that you are a blog writer or a business owner that is searching for a webhosting, as well as does not know where to begin, since it’s a fantastic service for that target market generally.
Allow’s think of, you intend to hold your websites as well as make them visible. Okay?
You already have your domain (which is your site location or LINK) but now you intend to “turn the lights on”. Bluehost Corporate Pricing
You require some organizing…
To accomplish all of this, and also to make your internet site noticeable, you need what is called a “web server”. A server is a black box, or gadget, that stores all your internet site information (files such as photos, messages, video clips, web links, plugins, and various other details).
Now, this server, has to be on regularly and it has to be connected to the internet 100% of the moment (I’ll be stating something called “downtime” later).
Furthermore, it also needs (without obtaining also elegant and right into details) a file transfer protocol typically known as FTP, so it can show web internet browsers your web site in its intended form.
All these things are either expensive, or require a high degree of technical ability (or both), to develop as well as maintain. As well as you can completely head out there as well as discover these points on your own and established them up … yet what concerning instead of you acquiring as well as keeping one … why not just “leasing organizing” rather?
This is where Bluehost is available in. You rent their web servers (called Shared Hosting) and you release an internet site making use of those web servers.
Considering that Bluehost keeps all your files, the business likewise allows you to set up your content administration systems (CMS, for brief) such as WordPress for you. WordPress is a very popular CMS … so it simply makes sense to have that option readily available (practically every hosting business now has this alternative also).
In other words, you no more need to set-up a web server and after that integrate a software program where you can construct your material, separately. It is already rolled into one plan.
Well … picture if your web server is in your house. If anything were to take place to it in any way, all your documents are gone. If something goes wrong with its inner procedures, you need a specialist to repair it. If something overheats, or breaks down or obtains damaged … that’s no good!
Bluehost takes all these hassles away, as well as takes care of everything technical: Pay your web server “lease”, as well as they will deal with every little thing. And once you buy the service, you can after that start concentrating on adding content to your site, or you can place your effort into your marketing campaigns.
What Provider Do You Obtain From Bluehost?
Bluehost offers a myriad of various services, however the key one is hosting obviously.
The organizing itself, is of various types incidentally. You can rent a common web server, have a dedicated web server, or additionally an onlineprivate web server.
For the function of this Bluehost review, we will concentrate on holding services as well as other services, that a blog owner or an on the internet entrepreneur would certainly need, instead of go too deep into the rabbit opening as well as discuss the other solutions, that are targeted at more experienced individuals.
- WordPress, WordPress PRO, as well as shopping— these holding solutions are the bundles that enable you to host an internet site utilizing WordPress as well as WooCommerce (the latter of which permits you to do e-commerce). After purchasing any of these plans, you can begin building your site with WordPress as your CMS.
- Domain name Marketplace— you can additionally buy your domain from Bluehost rather than other domain registrars. Doing so will certainly make it much easier to aim your domain name to your host’s name servers, given that you’re making use of the very same market.
- Email— as soon as you have actually acquired your domain name, it makes good sense to additionally obtain an email address connected to it. As a blog writer or on-line entrepreneur, you ought to virtually never ever use a complimentary email service, like Yahoo! or Gmail. An email similar to this makes you look unprofessional. Fortunately, Bluehost provides you one for free with your domain name.
Bluehost additionally uses specialized web servers.
As well as you may be asking …” What is a devoted web server anyway?”.
Well, the important things is, the basic webhosting bundles of Bluehost can only so much traffic for your website, after which you’ll need to update your hosting. The reason being is that the usual servers, are shared.
What this indicates is that a person server can be servicing two or more sites, at the same time, among which can be your own.
What does this mean for you?
It means that the solitary web server’s sources are shared, and it is doing numerous jobs at any given time. As soon as your website begins to hit 100,000 website visits every month, you are mosting likely to require a committed web server which you can additionally obtain from Bluehost for a minimum of $79.99 per month.
This is not something yous needs to bother with when you’re starting yet you should keep it in mind for sure.
Bluehost Pricing: How Much Does It Expense?
In this Bluehost evaluation, I’ll be concentrating my focus mainly on the Bluehost WordPress Hosting bundles, given that it’s one of the most prominent one, and also very likely the one that you’re trying to find and that will fit you the very best (unless you’re a big brand, firm or website).
The three readily available plans, are as follows:
- Fundamental Plan– $2.95 per month/ $7.99 regular cost
- Plus Plan– $5.45 monthly/ $10.99 normal rate
- Option And Also Strategy– $5.45 per month/ $14.99 routine cost
The very first rate you see is the price you pay upon join, and also the 2nd price is what the price is, after the first year of being with the business.
So basically, Bluehost is mosting likely to charge you on an annual basis. And also you can likewise select the amount of years you intend to hold your site on them with. Bluehost Corporate Pricing
If you choose the Standard strategy, you will certainly pay $2.95 x 12 = $35.40 beginning today and also by the time you enter your 13th month, you will certainly currently pay $7.99 monthly, which is additionally billed per year. If that makes any type of feeling.
If you are serious about your website, you should 100% get the three-year option. This means that for the fundamental plan, you will certainly pay $2.95 x 36 months = $106.2.
By the time you strike your fourth year, that is the only time you will pay $7.99 per month. If you think about it, this strategy will certainly save you $120 during three years. It’s not much, but it’s still something.
If you wish to get more than one website (which I very advise, and if you’re major, you’ll probably be obtaining even more eventually in time) you’ll intend to make use of the choice plus plan. It’ll permit you to host endless internet sites.
What Does Each Strategy Offer?
So, when it comes to WordPress hosting plans (which are similar to the shared hosting plans, but are extra tailored in the direction of WordPress, which is what we’ll be focusing on) the attributes are as complies with:
For the Basic strategy, you get:
- One website only
- Safe website through SSL certification
- Optimum of 50GB of storage space
- Cost-free domain name for a year
- $ 200 advertising and marketing credit
Keep in mind that the domains are acquired separately from the organizing. You can obtain a totally free domain with Bluehost right here.
For both the Bluehost Plus hosting as well as Choice Plus, you obtain the following:
- Unlimited number of websites
- Free SSL Certification. Bluehost Corporate Pricing
- No storage space or bandwidth limit
- Complimentary domain for one year
- $ 200 advertising credit scores
- 1 Office 365 Mailbox that is totally free for 1 month
The Choice Plus plan has an included advantage of Code Guard Basic Back-up, a back-up system where your file is conserved as well as duplicated. If any kind of accident takes place and your site data vanishes, you can recover it to its original kind with this attribute.
Notice that despite the fact that both plans set you back the exact same, the Choice Strategy after that defaults to $14.99 per month, regular price, after the set amount of years you have actually picked.
What Are The Advantages Of Using Bluehost
So, why choose Bluehost over other webhosting services? There are thousands of web hosts, a number of which are resellers, however Bluehost is one select few that have stood the test of time, as well as it’s most likely one of the most popular out there (and for good factors).
Below are the three primary advantages of selecting Bluehost as your host company:
- Web server uptime— your web site will certainly not show up if your host is down; Bluehost has greater than 99% uptime. This is very important when it pertains to Google Search Engine Optimization and rankings. The higher the far better.
- Bluehost rate— exactly how your web server response determines exactly how fast your internet site reveals on a web browser; Bluehost is lighting quick, which suggests you will lower your bounce rate. Albeit not the most effective when it comes to packing rate it’s still hugely essential to have a quick rate, to make user experience much better and also much better your ranking.
- Unrestricted storage space— if you get the Plus strategy, you need not worry about how many data you save such as videos– your storage ability is endless. This is actually important, due to the fact that you’ll most likely run into some storage issues later on down the tracks, and you don’t desire this to be an inconvenience … ever.
Finally, customer assistance is 24/7, which suggests no matter where you are in the globe, you can speak to the support group to fix your site problems. Pretty typical nowadays, but we’re taking this for provided … it’s additionally very crucial. Bluehost Corporate Pricing
Additionally, if you have actually obtained a free domain name with them, after that there will be a $15.99 charge that will be deducted from the quantity you initially acquired (I picture this is due to the fact that it type of takes the “domain out of the market”, not exactly sure regarding this, however there possibly is a hard-cost for registering it).
Lastly, any kind of demands after 1 month for a refund … are void (although in all sincerity … they ought to most likely be stringent below).
So as you see, this isn’t always a “no doubt asked” policy, like with some of the various other organizing choices available, so make sure you’re alright with the policies before continuing with the organizing. | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00547.warc.gz | CC-MAIN-2021-49 | 13,831 | 82 |
https://forum.zenphoto.org/discussion/1409975/how-to-move-the-default-albums-folder | code | The simpler media website CMS
My ZenPhoto is installed in the folder:
while the default ZenPhoto albums are located in folder:
Is it possible to move the album folder to the same level as the installation folder?
Thanks for any kind of help! | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831334.97/warc/CC-MAIN-20181219045716-20181219071211-00042.warc.gz | CC-MAIN-2018-51 | 241 | 5 |
https://www.computing.net/answers/hardware/desktop-does-not-boot-and-crashes-frequently/83743.html | code | I am facing two problems on my Dell Inspiron 560s (possibly related)
1. The computer sometimes does not boot. Infact it does not even get to the BIOS screen. Once I switch on the power button, it just stays there - no BIOS, no POST nothing - Just a blank screen.
2. I got past this problem by unseating/reseating the memory cards. System boots up, but relapses into the same behavior as point# 1 in probably 10-15 days time.
3. After the memory reseat process, when I boot up Win 7, it frequently freezes/crashes with a video scheduler error. The system has an ATI Radeon card for which I reinstalled the drivers. As a last resort I have even reinstalled the system. However the crash/freeze cycle continues, even after the reinstall.
Are these two problems related? Does it indicate there is something wrong with the memory or the video card? Any help is greatly appreciated.
My system specs are as follows:
Model: Dell Inspiron 560S
Processor: Intel Core2 Duo
Memory: 2GB DDR3
Video Card: ATI Radeon | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00260-ip-10-171-10-70.ec2.internal.warc.gz | CC-MAIN-2017-04 | 1,001 | 10 |
https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Discussions/Python-Tool-Downloading-Qualtrics-Survey-Data-using-Python-API/m-p/505994 | code | I am thrilled to post this "one tool wonder" made possible by the new Python Tool in Alteryx Designer 2018.3.
Thank you to @IanCo, our Solutions Consultant, for steering me in this direction.
This post will show you how you can use the new Alteryx Designer Python Tool to implement the Qualtrics Python 3 API calls to download your survey data. (There is a template workflow attached below that includes all of the Qualtrics Python 3 code mentioned in this post for you to customize as follows.)
Open a new workflow and drag the Python Tool onto your canvas.
Add the Qualtrics Python 3 code for the API calls just below the Python line of code that says "from ayx import Alteryx". The full code is at the bottom below so that you may cut-and-paste it into your Python Tool. (I found this Python 3 code on the Qualtrics web site: https://api.qualtrics.com/docs/response-exports. Make sure you select the tab for Python 3.)
Modify the following three lines of the Qualtrics Python 3 code for your particular Qualtrics token, survey and data center values.
apiToken = "YOUR_API_TOKEN_HERE"
surveyId = "YOUR_SURVEY_ID_HERE"
datacenter = 'YOUR_DATA_CENTER_ID_HERE"
Qualtrics provides some help for finding these values on the following web sites:
Change the following line from "30" to "-1" to disable the timeout -- The time to wait (in seconds) for output from executions.
timeout = Integer(-1, allow_none=True, ...
Run your workflow. The returned CSV file will be in the location that you entered above. There is nothing that comes into your workflow canvas. You can then use the Input Tool to read in the CSV file to a workflow.
# Step 2: Checking on Data Export Progress and waiting until export is ready while requestCheckProgress < 100 and progressStatus is not "complete": requestCheckUrl = baseUrl + progressId requestCheckResponse = requests.request("GET", requestCheckUrl, headers=headers) requestCheckProgress = requestCheckResponse.json()["result"]["percentComplete"] print("Download is " + str(requestCheckProgress) + " complete")
I'm wondering if anyone has updated this code with the new Python 3 code provided, and now required, by Qualtrics. Aside from being able to spell it, I know nothing about Python other than I could copy, paste, and edit the code provided earlier to use with our own surveys. Could not be more thankful for those involved in the initial code! Could use some help if anyone is able to take the new code set and apply needed settings as I am getting nothing but very generic errors. Here is the link to the new code provided by Qualtrics if it's helpful.
I've tried updating the APIKEY, the datacenter (which is the same) and the SurveyID, but I get errors still that mean nothing to me, so I'm still stuck. Thanks for taking a look, Heather. Sure wish you still worked with this!
I've decided to post my full code here to see if anyone is able to ascertain where the error might be in my syntax. I have replaced the APIKey and surveyID with xxxxxxxx, however any punctuation such as single or double quotes are exactly as they appear in my code. My previous post shows the error being rendered. All of this is happening via the use of the Python tool contained within the Developer tool set. I'm guessing this is something very simple but again, I don't code python at all. Based on the rendered error, it would appear that I need to define the environment variable but I don't know how to do this.
Thanks ahead of time to anyone that can help! This is big for our organization!
@gene_denny I would consider filing a support ticket on this one, as the error makes me think it's possible that the Jupyter notebook interface is not interacting well ... maybe not closing the code execution session??? | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510259.52/warc/CC-MAIN-20230927035329-20230927065329-00185.warc.gz | CC-MAIN-2023-40 | 3,735 | 19 |
https://thomaspetty.com/how-to-repair-a-hacked-wordpress-website-a-case-study/ | code | Every once in a while, someone comes to me with a hacked website needing someone to fix it. A former client of mine for whom we’d built a WordPress site a few years ago was having problems. We hadn’t supported their site in three or four years (they wanted to do it themselves) and we didn’t host their site.
If you looked in Google, you could see that Google had flagged the site as having malware. Not cool:
If I tried to go to the site, you got this awful red screen warning you away. Their clients were seeing the same screen, and it wasn’t too good for their reputation.
“Sure no problem, we can take a look.” I quoted them 5-10 hours of work, because you never know what you’re going to find when you get in there.
Why WordPress Is Easy To Hack (Usually)
WordPress is a favorite of hackers because:
- It’s so easy to set up with the default settings that aren’t necessarily very secure.
- So many websites have the default user id: “admin”. (ALWAYS change this id or delete it!)
- The site is often set up and managed by volunteers or company employees who don’t know anything about web security.
- There are a ton of targets – I read somewhere that a whopping 57% of all websites in the world are built on WordPress. If one’s too hard to get into, there are plenty others to go after.
The easy setup part means that typically there are some holes left that can be exploited. Older versions of WordPress have vulnerabilities that have been patched in later versions, and if sites haven’t been upgraded, then hackers know how to exploit those problems.
If you have a WordPress site, you can easily tell the version by going to www.yourwebaddress.com/readme.html. Now I know your version, and (if I were a hacker) instantly know what to do to break down the door.
Another problem is insecure passwords. To further compound this, using an admin-level user id to post blog posts means that a hacker now knows your admin account, and just has to guess or brute force your password. Look at your blog posts, and you’ll probably see an author link at the top or bottom. Click it, and now you know the user id under which it was published.
I tell my clients to NEVER use an admin-level user id to post blog posts. Use an editor- or author-level account, and only use an admin-level for admin stuff.
Determining What Got Hacked
So getting back to the hacked site, I did not want to log into the site from the front end, because clearly it had a payload that could install on my computer. It might redirect me to someplace nasty, try to install a virus on my computer, or install malware that captures passwords. So I definitely don’t want to go in the front door, because who knows what kinds of booby traps were set?
I connected to the server with FTP and looked around. I was specifically looking for folders in the /wp-content directory that looked suspicious or weren’t the “regular” stuff like photos, and PDF files. Things like EXE files, ZIP files, or other things that might have a payload. Didn’t find anything there, so that’s good.
So I pulled the entire site down to my computer. If there was anything infected, my Norton would have flagged it and prevented it, but I wanted to be careful.
Next I pulled up phpMyAdmin and logged into the database. I wanted to see if there were any unusual user ids or other entries that didn’t look right. If someone’s user id had been compromised, it’s possible that a hacker could have set up additional user ids. Didn’t see anything out of the ordinary there either, so that was good.
I exported the full database and downloaded the SQL file to my computer.
Finally, I started looking at the individual files, especially the core PHP files. Aha! I found three files that had been changed on the same date and time, that should have had dates matching the rest of the site. In looking at the PHP files, there was code that had been injected which definitely was hacker code. It’s pretty easy to spot when you see it. Fortunately, of the three files, two were core WordPress files, and only one was from the site theme.
Repairing A Hacked Website
So I decided to set up a brand new, fresh install of WordPress, and only move the content over – after removing the hacker code from the one theme file of course. All plug-ins would get a fresh install too just to be sure (because the the database has all the configuration information, that would carry over without a problem).
I started with a blank slate, installed a fresh copy of WordPress, using the latest code (version 4.2.4) and copied just the wp-content/uploads and the one theme folder over (no plug-ins!). I used phpMyAdmin to import the database file, and changed the wp-config.php file to point to the new database. After I installed all the old plug-ins (fresh from wordpress.org), I fired up the website.
Everything seemed to be working.
Next, I installed and configured the iThemes Security Pro plug-in, which is my go-to for WordPress security.
The last step was to go to Google Search Console (formerly Webmaster Tools), and request that they review the website. Check the checkbox at the bottom, and you have to explain what you did to fix it. I doubt anyone physically reads it, but it’s good to document it.
Within a few hours, they had declared the website free of malware, and by the next day, the red warning screen was gone. Yay.
Putting “Hacker Insurance” in Place
I’m a big believer or prevention is a better policy than after-the fact repair, which can get very expensive very quickly. These are my suggestions to prevent or at least minimize your vulnerabilities:
Use a high quality hosting company – don’t go cheap. You get what you pay for, and cheap isn’t always a good investment. I’m a huge fan of WPEngine, and am actively moving all my WordPress clients over to them. They manage brute force attacks directly, and even disallow unsafe plug-ins. If you move a site over with a disallowed plug-in, they warn you, and within 7 days will remove it if you don’t first. Their tech support is top-notch, and I can’t say enough good about them.
Install anti-hacker software and configure it properly (see my previous article: How to Hacker-Proof a WordPress Website).
Keep a nightly backup of both the database and the WordPress files. WPEngine does this automatically without any additional plug-in. If you need to do an upgrade or install plug-ins, do an ad hoc “Backup Point”, and you can instantly roll anything back that pukes. If someone gets in and does bad things to your website, you can push it back with a click of the mouse. If you don’t use WPEngine, Backup Buddy (also from iThemes) is a good plug-in to use too, and allows you to restore your website pretty easily.
I don’t think you can ever be 100% sure of being hack-proof, but you can certainly make yourself much less of a target. If it’s too hard to get in, they’ll just go somewhere else and beat on someone else’s door. Good luck! It’s a jungle out there. | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00439.warc.gz | CC-MAIN-2019-35 | 7,041 | 34 |
https://www.freelancer.com/job-search/export-data-mysql-database-xls-php/4/ | code | I need an XSL template creating for Magento xtento order export extension. It needs to be in CSV format: 1) Text delimiter: “ (double quote) 2) Field delimiter: , (comma) 3) Record (line) delimiter: CRLF (ASCII chars 13 & 10) One line per order line. Fields as follows: 1) Unique ID of order. 2) Account Code (I think this is
...users to connect to an Oracle database to export data as an Excel tab-delimited file. The contents of the file will be modified by the user and subsequently require to import the file back into the Oracle database. Looking for a skilled developer with the following skills: - strong Java developer - Oracle database - Weblogic - Spring framework
Need a plugin for WooCommerce that activates an export function which takes order data and compiles into a fixed length file (.txt) and uploads to a desired FTP(or SFTP). Visual parts: - "button" on each order in the WooCommerce Orders panel, which activates the function - Settings page for the plugin where specific parts of the .txt-file can be
...help conducting market research in Europe for exporting Mexican pottery from a small business: talavera. I need specific information about which country in Europe is best to export to, a foreign market analysis for this country and market entry strategies (including logistics). Some practical financial information about exporting the product would be
Automate a CSV file to export hourly from our Inventory Management system to customers FTP server. We use Trade Gecko software with API access. Wording from our customer; Do you have anyone with IT or technical capabilities within the business who would be able to automate the inventory feed in csv format? Essentially to automate the exchange
In the following text, I have given destination cell in the begining as A1, A2. A1 Consider a demand curve which takes the form of a straight-line cutting both axes. Elasticity at the mid-point of the line would be Codes: I II III IV A2 0 A4 1.S B1 Price taker firms A3 1.0 A5 2.0 E1 Opportunity costs are also known as B2Advertise ...
I have 3000+ items to upload to my existing amazon account. I looking for someome that is able to make it with a xls or csv file I have many data (image, description, price, quantity...). Look attachment ([url removed, login to view]). But i'm not able to make a file, with it, compatible with amazon requirement. In more i haven't any Sku but for
I have a website that I am building and need help. My MLS uses a software called RETS connector to export real estate listings into wordpress by using a CSV using Wp All import, after that it goes into the real estate 7 theme on wordpress automatically. I need someone that knows this software or can find an alternative solution and knows how to import
Hi. Our company is working on 3D morphed shirt customize tool and we need 3D modeler who can help us to fix 3D morphed models. Some guy did work on the project but he's no more available, so we want someone who can help us to modify/update the model. You must know how you can generate the model into js or json since we are working with that. I know there are some plugins that can generate...
We are looking for a UI/UX Designer. Could lead to Full Time Work based on performance. We have a responsive web app (uses Material Design) that has been customized from a theme. You will be given access to the theme assets and your job will be to design new components that can be easily built either by our developers. Assets are in psd files mainly but feel free to convert those to any other t...
I need some changes to an existing magento2.
...enter their data using several searchable fields. Other Vendors/Subscibers can search overall database to retrieve Reports contributed by other Vendors. Must be able to merge data into Report Forms for export into Microsoft Word to provide to clients. Also need ability for individual Subscribers to keep personal billing records to export Invoices. This
I have a well-made Rhino 3D model file and a map file that you need to import it into Unity and refer to the demo of the video demo that can run iOS and Android apps.
Export IQFeed/CQG real time market data (BID & ASK) to .TXT / .CSV file formate. I already subscribe to IQFeed package.
I need a developer who knows very good the JEvents component of Joomla. The task is to add a PDF export from the Jevents filter module. The module allows to search by event categories, locations, and date range. I would like to add an PDF export from these results. | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823350.23/warc/CC-MAIN-20171019160040-20171019180040-00824.warc.gz | CC-MAIN-2017-43 | 4,515 | 15 |
https://lists.pidgin.im/pipermail/cabal/2006-October/000024.html | code | [Cabal] FAQ and support.pidgin.im
lschiere at users.sf.net
Mon Oct 30 22:00:38 EST 2006
On Mon, Oct 30, 2006 at 08:29:22PM -0500, Luke Schierer wrote:
> I have started using faqomatic at
> Ideally this would show up as http://support.pidgin.im/faq, but I am
> unsure how to do that.
> It is not perfect, it does not have a secure way of authenitcating
> yourself with it, and I do not see its user management, though I do see
> that it does have a concept of permissions for different types of users.
> Given it is not perfect, I am open to other suggestions. It does
> however fill many of my basic desires, items can be categorized and
> moved easily, and they can be atomically edited. They can also be
> removed relatively easily, and you can add to the faq without
> manipulating a large text file.
> Naturally it needs work to make it Look nice. Sean goesso far as to
> place it in his top 3 or 4 worst looking sites ever.
> Let me know what you all think. Is there something better I should be
> looking at? Does the lack of good auth prove to be an unacceptable
I've put a handful of questions up in both the faqomatic faq and a few
less in the trac wiki. The latter lacks any sort of backward
navigation, the reason being that I'd have to create such navigation by
hand (and edit it if I move a question). If we decide to go with the
wiki version, I'll (eventually, if someone does not beat me to it), go
back trough and finish.
More information about the Cabal | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00282.warc.gz | CC-MAIN-2023-50 | 1,470 | 26 |
https://www.simbel.com/training/the-complete-2022-web-development-bootcamp-by-udemy | code | We'll take you step-by-step through engaging video tutorials and teach you everything you need to know to succeed as a web developer.
The course includes over 55 hours of HD video tutorials and builds your programming knowledge while making real-world websites and web apps.
Throughout this comprehensive course, we cover a massive amount of tools and technologies, including :
Front-End Web Development
Bash Command Line
Git, GitHub and Version Control
Backend Web Development
Deployment with GitHub Pages, Heroku and MongoDB Atlas
By the end of this course, you will be fluently programming and be ready to make any website you can dream of.
You'll also build a portfolio of over 25+ websites that you can show off to any potential employer.
Sign up today, and look forward to:
Animated Video Lectures
Code Challenges and Coding Exercises
Beautiful Real-World Projects
Quizzes & Practice Tests
Downloadable Programming Resources and Cheatsheets
Our best selling 12 Rules to Learn to Code eBook
$12,000+ worth of web development bootcamp course materials and course curriculum | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00318.warc.gz | CC-MAIN-2023-50 | 1,077 | 18 |
https://forum.camunda.org/t/formio-plugin-for-camunda-tasklist/22215 | code | Been working on a little but I would say valuable plugin for Tasklist: Simple, easy to use, straight to the point; Formio Integration!
This gives us Formio forms within tasklist (start forms and User Task Forms) without having to write new HTML. You can build your formio JSON and import the JSON file.
It provides multiple options and does not require any modifications to camunda modeler.
- Configure the use of the formio form using regular Form Key syntax:
- Deploy .json (form schemas) through the regular deployment API along with your BPMN
- Re-use forms by placing the JSON files in the file system / webapp folder of tasklist:
- Pre-populate fields based on Process Variables / Task Variables: You can define and manipulate which variables to fetch (inlcuding JSON variables). See the Readme.
- Submissions are saved as a json variable (including metadata about the browser that made the submission) See Readme.
- Customize the submission variable name:
- Set a submission as Transient Variable so you can manipulate the submission within the BPMN/transaction before saving as a process variable.
This should give the community a MUCH NEEDED upgrade to the embedded angular forms which have really shown their age and hardships with lack of features over the years…
Feedback is always welcome! | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141737946.86/warc/CC-MAIN-20201204131750-20201204161750-00233.warc.gz | CC-MAIN-2020-50 | 1,304 | 12 |
https://blog.px.dev/public-beta-launch/beta-launch/ | code | Part 2: A Simple Continuous Cross-Language (Go, Rust, C/C++) Profiler written in eBPF
June 01, 2021
In 2018, we started Pixie to build a magical developer experience to redefine how we explore, monitor, secure, and manage our applications. Today, we’re excited to finally share it with the broader developer community by launching Pixie Community’s Public Beta along with announcing our Series A investment by Benchmark and GV!
Pixie Community is a free forever developer tool which gives developers instant application observability. With Pixie, developers don’t need to change code, manually set up ad hoc dashboards or compromise on how much application performance data they can observe.
At its core, Pixie aims to save developers time. As we moved towards building microservices-based applications on Kubernetes, we grew frustrated by how tedious it was to instrument and analyze performance data. This led us to get heads-down to make three simple ideas real:
No-Instrumentation Data Collection: Pixie leverages novel technologies like eBPF to automatically collect baseline data (metrics, traces, logs and events) for the application, Kubernetes, OS and network layers. For last-mile custom data, developers can dynamically collect logs using eBPF or ingest existing telemetry.
Script-based Analysis: Developers and operators use Pixie by running community contributed, team specific or custom scripts from Pixie’s native debugging interfaces (web, mobile, terminal) or from integrations with established monitoring platforms. This code-based approach enables efficient analysis, collaboration and automation.
Kubernetes Native Edge Compute: Pixie runs entirely inside Kubernetes as a distributed machine data system without customer data being transferred outside. This novel architecture provides customers a secure, cost-effective and scalable way to access unlimited data, deploy AI/ML models at source and setup streaming telemetry pipelines.
In preparation for today’s launch, we started Pixie’s Private Beta in May to get early community feedback. The response from the community was humbling and fundamental in shaping the Pixie. Today, early “Pixineauts” range from developers in early stage startups to engineering teams building internet scale streaming applications who are all already running Pixie in their production Kubernetes environments.
Today, we are also announcing our $9.15 million in Series A funding led by Benchmark with participation from GV. This investment will enable us to open up Pixie to the broader community and continue to improve our developer experience. Our goal is to make Pixie not only the most efficient (and fun!) developer tool but also the most extensible. With this investment, we’ll be doubling down in scaling our community efforts to accelerate the democratization of machine data.
Thank you for all the support to date 🙏 We’re excited to build Pixie for the community and with the community. If you are interested in Pixie’s data superpowers: try Pixie Community's Public Beta, sign-up for Pixie Demo Day, and if you are interested, we’re hiring! | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00558.warc.gz | CC-MAIN-2021-31 | 3,131 | 11 |
http://www.barrons.com/articles/how-are-indian-it-companies-affected-by-us-h1-b-curb-1483681968 | code | India's IT companies tumbled on Friday after two US Congressmen from California introduced a new bill to curb the use of H1-B work visas.
The new bill requires the annual salary of applicants to be at least $100,000, up from $60,000 currently, and eliminates a Master’s degree exemption.
How will the reforms affect the operations of Indian IT companies?
Tech Mahindra (532755.India) has the largest headcount in the U.S, in percentage terms. CLSA kindly provided us with the breakdown. You can see 19% of Tech Mahindra's employees are in the US. (See chart)
HCL Tech (532281.India) sank 4.1%, Tech Mahindra dropped 4%, Tata Consultancy Services (532540.India) was down 2.9%, Infosys (INFY) fell 2.4%, Wipro (WIT) fell 1.6%. Overnight, Accenture (ACCN) dropped 1.5%.
Kotak Securities explained how this process works:
We have spoken to lobbyists in US on this topic and they have indicated increase in cost of doing business in the US.
We believe bills would be introduced in 1QCY17. It will likely come up for voting and committee considerations in the second half of calendar year and changes in regulations could be applicable to Indian IT in FY2019. We understand that the new US government may give precedence to a few other bills/plans such as Border Security Bill, Obamacare, infrastructure spending, etc
The impact on FY2019 earnings for Indian IT would be in the range of 12-22% at gross level.
TCS and Tech Mahindra will face the maximum impact followed by Wipro, Infosys and HCLT. | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607806.46/warc/CC-MAIN-20170524074252-20170524094252-00133.warc.gz | CC-MAIN-2017-22 | 1,493 | 10 |
https://mark-watson.blogspot.com/2004/09/one-of-my-favorite-things-about-java.html | code | One of my favorite things about Java: packaging both code and data in JAR files
Typically, I serialize required runtime data to a binary file and when I create a JAR file I add the binary serialized data file as a top level entry. To read the data into memory, I use something like this:
InputStream ins =
ObjectInputStream p = new ObjectInputStream(ins);
Vector my_vector = (Vector) p.readObject();
Then, I can just use the JAR file in other applications and I have both code and required data. | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00211.warc.gz | CC-MAIN-2021-17 | 495 | 6 |
http://stackoverflow.com/questions/161737/what-are-the-best-asp-net-performance-counters-to-monitor | code | The ones I use the most are the memory counters. All of them. I know that they aren't specific to ASP.NET, but the only problems I've ever had with a web app were memory issues.
Excessive heap, gen 2 collections and % time in GC are the most important ones. If your time in GC is spiraling out of control it's a sign that your UI and viewstate are too big. A large heap and lots of gen 2 collections says you're keeping too much stuff in memory (inproc session state, for example).
Regular ASP.NET apps based on web controls require lots of objects being created and then destroyed quickly, as a page is reconstructed and then disposed. High gen0 collections isn't bad. Its when you start seeing lots of objects make it into gen1 and then gen2 that suggests you're either leaking memory or are holding onto too much state. | s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737952309.80/warc/CC-MAIN-20151001221912-00246-ip-10-137-6-227.ec2.internal.warc.gz | CC-MAIN-2015-40 | 822 | 3 |
https://www.cs.uoregon.edu/research/tau/docs/newguide/bk03ch06s02.html | code | To create a TAU launch configuration, click the profile button added near the run and debug buttons. This will provide an interface for launching either a standard or parallel C, C++ or Fortran application, similar to the interface provided by the standard run configuration dialog. You may select a pre-existing run configuration or create a new one in the usual way.
The run configuration options are equivalent to those of a standard run configuration, with the addition of a performance analysis tab a parametric study tab and a TAU tab. To run an application with TAU first make sure that the TAU option is selected in the drop down box on the performance analysis tab. You may also specify that a Tau instrumented executable should not be run after it is built. This option will leave a new TAU specific build configuration available for your use. It will have the name of the original build configuration, with the tau configuration options used appended. The executables available in such build configurations can be run through the standard run and debug launch configurations. This option can be useful if you need to launch Tau instrumented binaries outside of eclipse. There is also an option to select existing performance data. This will upload data specified on the filesystem to a selected database, rather than generating the data from a project in Eclipse.
On the TAU tab you must select
a Tau makefile from the available makefiles in the Tau architecture directory
you specified. You may select specific configuration options to narrow the list
of makefiles in the dropdown box. Only makefiles configured with the
option will be listed. Additional Tau compiler options are provided on the Tau Compiler sub-tab.
If you select a makefile with the PAPI counter library and
enabled you may specify the PAPI environment variables using the Select PAPI
Counters button. The counters you select will be placed in the environment
variables list for your run configuration.
You may specify the use of TAU selective instrumentation either by selecting a pre-defined selective instrumentation file, by selecting the internal option to have Tau to use a file generated by the selective instrumentation commands available in the Eclipse workspace or by selecting the automatic option to have eclipse generate a selective instrumentation file using TAU's tau_reduce utility. Note that the automatic option will cause your project to be rebuilt and run twice.
By default TAU profile data will only be stored in a perfdmf database, if available. The database may be selected on the Data Collection sub-tab. You may specify that performance data should be kept on the file-system with the Keep Profiles option.
If you wish to collect the resulting profile data on TAU's online Portal, check the "Upload profile data to TAU Portal" box. After the profiling has finished you will be prompted to provide your user name, password and specify the destination workspace. To view the profile data log on to the portal and select the specified workspace. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00448.warc.gz | CC-MAIN-2021-43 | 3,048 | 14 |
http://www.40towers.co.uk/ | code | John Cleese, aka Basil Fawlty, has signed an agreement to publish his Autobiography the Independent reports. Cleese will work with Nigel Wilcockson from Random House Books who have acquired the world rights to the publication.
The book follows on from his ‘Alimony Tour’ and according to the big man: "it's the perfect moment to look back on my life in anticipation of the next fifty years.
Going against the advice of his former colleagues who he feared would ‘knee-cap’ him on hearing such news; Fawlty Towers star John Cleese has tied the knot and married for the fourth time.
Cleese has married Bath-based Jewellery designer Jennifer Wade aged 41 in a ceremony on the luxury Caribbean island of Mustique, The
GARSTANG Theatre Group is ringing the changes this Spring. Instead of its usual musical production the group is presenting three episodes from the classic TV series “Fawlty Towers.
The episodes are “The Hotel Inspectors,” “Communication Problems,” and “Basil the Rat”
Secretary Terry Underdown said all the scripts were from the original BBC programmes.
Russell Brand, Coldplay and a host of other acts have taken part in a Secret Policeman's Ball gala in New York - the first to take place outside the UK.
The Amnesty International benefit at Radio City Music Hall was held to mark the organisation's 50th anniversary.
British comedians like Eddie Izzard and David Walliams were joined by such US stars as Jon Stewart and Ben Stiller in the first such event since 2008.
ACTOR David Kelly, who made millions laugh as a dodgy Irish builder in Fawlty Towers, has died at 82. Dublin-born Kelly had an impressive stage, film and TV career spanning 50 years. But it was nine minutes he spent as Mr O'Reilly in the hotel comedy classic in 1975 that earned him lasting fame.
Kelly became an unlikely sex symbol in his late 90s for riding a motorbike naked in the film Waking Ned.
Black Country bride Faye Garrington has told how her dream Princess Di-style wedding did not quite go to plan at a Devon hotel with a “Fawlty Towers” style reputation.
Fall-outs with the management, a row with a hairdresser and a revving scooter led to tears and outbursts from bride Faye.
See more images by clicking on the picture on the right.
LOS ANGELES (TheWrap.com) - Classic episodes of "Fawlty Towers" and "Miss Marple" are just a few of the titles that will be available to Netflix customers in the United Kingdom and Ireland thanks to a new licensing pact with BBC Worldwide.
Starting in early 2012, Netflix subscribers across the pond will be able to access the hilarious mishaps that befall the irritable innkeeper and the small-town murders that occupy the elderly sleuth's time along with other programs from the British television producer, the two companies announced on Tuesday.
Veteran comedy producer John Howard Davies has died at the age of 72.
After an initial short career as a child actor, Howard Davies worked outside of showbusiness before joining the BBC as a production assistant. Whilst at the corporation he produced and directed series of some of the network's most popular and influential comedy successes: Fawlty Towers, The Good Life, The Goodies, Monty Python's Flying Circus, Steptoe And Son and All Gas And Gaiters were amongst those that he worked on.
An original Fawlty Towers script belonging to Prunella Scales has been sold at auction to raise money for north Staffordshire's New Vic Theatre.
The actress, who played Sybil in the sitcom, donated it to the New Vic in memory of the theatre's founder Peter Cheeseman who died last year.
The script fetched more than £2,000 at Sotheby's.
It’s been 30 years since the classic comedy Fawlty Towers was first broadcast on television.
The antics of Basil Fawlty, his wife Sybil, hapless Spanish waiter Manuel and the ever-sensible Polly grabbed the imagination when it first went out on BBC Two on 19th September 1975. Subsequent re-runs of Fawlty Towers have confirmed its position as one of the finest British sit-coms.
The Devon hotel which inspired the classic BBC sitcom Fawlty Towers has been sold for about £1.5m.
Show creator John Cleese based the character of Basil Fawlty on Donald Sinclair, a former owner of the Hotel Gleneagles, in Torquay.
Husband and wife Panna and Kumar Patel have bought Gleneagles with Mr Patel's brother Keethri.
When is that picture frame ever going to get put up Basil? Well, probably never because John Cleese has launched his new web site today (14th November 2004) and it looks packed full of great stuff.
More of a mini Television network than a web site really. Mr. Cleese plans to update his site on an almost daily basis from his new television style studio in his home in America.
A sitcom starring former Monty Python star John Cleese has been dropped from US television after just two episodes.
Bosses at ABC television made the decision because the show, Wednesday 9.30, failed to pull in high ratings.
Many good shows need time to find an audience
Cleese, 61, who played an Australian TV executive who axed shows because of low ratings, was quoted in the Daily Telegraph as saying US TV executives were "scared" by the war for ratings and had "no idea".
BBC Worldwide's German remake of classic comedy Fawlty Towers will be much like the British original - except for the absence of the "Don't mention the war" sketch.
The new series, announced by Worldwide last year, will be produced by Cologne-based company Clou Entertainment and shown on German channel RTL.
Madcap comedy series Fawlty Towers is the UK TV industry's favourite British television programme, according to a survey published on Tuesday.
The poll, conducted by the British Film Institute, asked 1,600 programme-makers, TV critics, writers and executives to give their professional opinions and personal tastes.
Fawlty Towers, the classic British TV comedy series, is to be remade for American television.
A spokesman for John Cleese, the ex-Monty Python star who co-wrote the cult sitcom with his former wife Connie Booth, confirmed that a "changed format deal" has been agreed with CBS though he said nothing has yet been written. | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769321.94/warc/CC-MAIN-20141217075249-00167-ip-10-231-17-201.ec2.internal.warc.gz | CC-MAIN-2014-52 | 6,151 | 39 |
https://primer-computational-mathematics.github.io/book/b_coding/Intro%20to%20Python/19_Classes.html | code | A very popular concept in Computer Science is an object. Objects are theoretical entities that may contain data (fields) as well as specific functionalities (methods). Classes in Python provide a means of implementing types objects.
Oof… that is a lot of abstract terminology. Let us have a look at an example:
class Dog: # initialise an INSTANCE of the object def __init__(self, name): # assign the supplied name to the Dogs name self.name = name # only dogs can bark def bark(self): print("woof-woof!") # lets make some dogs d1 = Dog("Princess") d2 = Dog("Chloe") d3 = Dog("Spooky") # make them bark d1.bark() d3.bark() print(d2.name)
woof-woof! woof-woof! k
class Dog defines a conceptual dog. In the simple model dogs only have a name and can bark, this is what characterises them.
__init__ method is used when a dog is initialised e.g. in the
d1 = Dog("Princess"). Now
d1 is a member of the
Dog class as well as
bark() method is specific to dogs, so we can call it on any instance of a
IMPORTANT: Notice the
self keyword in the definition.
self will always be used as the first argument of the class methods.
self is also used to reference the fields of the object (e.g.
Let us give the
Dog class a bit more functionality:
class Dog: def __init__(self, name): self.name = name self.happy = True def makeHappy(self): self.happy = True def makeSad(self): self.happy = False def bark(self): print("woof-woof!")
Now we have methods that allow us to set the value of the
happy attribute. Every dog is happy when instantiated.
Some concepts can be treated as subsets of others. In this inheritance relation we distinguish two types of classes:
Parent class (base class): the class being inherited
Child class: the class that inherits
Consider the following two classes:
class Dog: def __init__(self, name): self.name = name self.happy = True def makeHappy(self): self.happy = True def makeSad(self): self.happy = False def bark(self): print("woof-woof!") class HappyDog(Dog): def __init__(self, name): super().__init__(name) def makeSad(self): print("Ha ha! I am always happy! Can't make me sad!") hd1 = HappyDog("Scooby") hd1.makeSad() hd1.bark()
Ha ha! I am always happy! Can't make me sad! woof-woof!
As you can probably see,
HappyDog inherits the methods from the
Dog class. It also has its version of the
makeSad method. We say that
HappyDog class overrides the
makeSad method of the
Dog class. We use the
super() method to denote inheritance from the
Whereas child classes can have multiple parent classes, it usually complicates the code and is not considered a good practice.
Classes without the
__init__ method cannot produce objects, they cannot be instantiated. They can still provide functionality, however. On a basic level, they can be treated as our own library. Notice that there is no
self argument in the methods of such class:
#regular class class ComplexNum(): def __init__(self, real, im): self.real = real self.im = im def __str__(self): return str(self.real) +" + " + str(self.im)+"i" #without __init__ class ComplexMath(): def add(com1,com2): return ComplexNum(com1.real+com2.real, com1.im+com2.im) def mul(com1,com2): return ComplexNum(com1.real*com2.real-com1.im*com2.im,com1.im*com2.real+com1.real*com2.real) com1 = ComplexNum(1,1) com2 = ComplexNum(1,-1) print(ComplexMath.add(com1,com2)) print(ComplexMath.mul(com1,com2))
2 + 0i 2 + 2i
ComplexNum is a regular class used to represent complex numbers.
ComplexMath provides some basic arithmetic operations on those numbers. There is no need to instantiate it, we just need the functionality.
__str__ method must be defined if you want to have a nice way of printing the instances of your class.
Classes are a very broad field of programming in Python, and this is just a brief introduction. We will explore classes in the Fundamentals of Computer Science more.
Aliens Define an
Alienclass which instantiates aliens with
ageequal to 1. It should also be able to increase the age of an alien by 1 by calling the
class Alien: def __init__(self): self.age = 1 def birthday(self): self.age+=1
Living Aliens Now upgrade your
Alienclass. Alien is instantiated with the field
True. Now, an alien can
considerDyingby taking a random integer from 0 to 10 (inclusive) and seeing if it is smaller than its age. If it is, the alien dies. You should also add the following method to the class definition:
def reproduce(self): return self.isAlive and random.randint(0,6) > self.age
import random class Alien: def __init__(self): self.age = 1 self.isAlive = True def birthday(self): self.age+=1 def considerDying(self): if random.randint(0,10) < self.age: self.isAlive = False def reproduce(self): return self.isAlive and random.randint(0,6) > self.age
Simulating population Now let us simulate the population of aliens which can die and reproduce.
On each time step (
forloop) we will make all aliens celebrate
An alien should be removed from the population when it dies.
Also, when reproduction is successful, a new alien is added to the population. Make sure to add the new aliens at the end of the time step, so they are not considered it the timestep they were born in.
Do the simulation for
You might want to print the size of the population vs time using matplotlib.
Start with only one alien in the population.
REMARK: This model of the population does not ensure that the size of the population will be stable. For now, the population ceases in the first couple of days or grows exponentially. If you want to explore the simulation more, consider adding population size in
reproduce method, it should limit the exponential growth.
import random import matplotlib.pyplot as plt class Alien: def __init__(self): self.age = 1 self.isAlive = True def birthday(self): self.age+=1 def considerDying(self): if random.randint(0,10) < self.age: self.isAlive = False def reproduce(self): return self.isAlive and random.randint(0,6) > self.age aliens = [Alien()] timeSeries = for i in range(100): aliensToAdd = 0 for alien in aliens: alien.birthday() if alien.reproduce(): aliensToAdd+=1 alien.considerDying() aliens = [x for x in aliens if x.isAlive] for new in range(aliensToAdd): aliens.append(Alien()) timeSeries.append(len(aliens)) x = list(range(100)) y = timeSeries plt.plot(x,y) plt.show() | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00667.warc.gz | CC-MAIN-2023-06 | 6,259 | 64 |
https://rosettagit.org/drafts/category-mpl/ | code | ⚠️ Warning: This is a draft ⚠️
This means it might contain formatting issues, incorrect code, conceptual problems, or other severe issues.
If you want to help to improve and eventually enable this page, please fork RosettaGit's repository and open a merge request on GitHub.
MPL is a multi-precision (long integer) arithmetic library for AutoHotkey.
Discussion and usage can be found on its [http://www.autohotkey.com/board/topic/19940-long-integer-arithmetic-library/ topic page], or the latest version can be [http://ahkscript.org/boards/code_downloader.php?id=11949&part=2 directly downloaded]. It currently only works in 32bit versions of AHK. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00825.warc.gz | CC-MAIN-2023-40 | 655 | 5 |
https://forum.howtoforge.com/threads/mod_proxy-rewrite-rules.9709/ | code | Hi all, I'm trying to test the mod_proxy stuff here. My goal is to have a XP server running apache behind my linux box and then if the URL has /app in it, it will pull from the xp server for my dynamic content applications I'm writing. The reason for this is the engine that I'm using has more features and is more powerful on the windows side vs the linux side. So in ISPconfig for my site, as a test, I added the following to the apache directives area; ====================================== ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /app http://192.168.11.1 ProxyPassReverse /app http://192.168.11.1 ======================================= However, after a save and check again, I get 'NOT SUPPORT' messages with my stuff commented out. So my question is, how do I accomplish this? Thanks, Ken PS. I'm basically sending it to my router website as a test. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104277498.71/warc/CC-MAIN-20220703225409-20220704015409-00779.warc.gz | CC-MAIN-2022-27 | 895 | 1 |
https://ilyenahirskyjdouglas.wordpress.com/ | code | I am a Postdoctoral Researcher in Human-Computer Interaction (HCI) at Aalto University. My work explores how we can use technologies to connect with each other through digital versions of ourselves and exploring how animals can interact with computer systems.
I am interested in building animal controlled technologies to explore what it means to interact as an animal (a non-human one) and how as system designers we can build and support an animals interaction with technology. Part of my passion is to push the boundaries of what is known towards creating a better environment for the animals and people we share our world with.
I got my Masters degree in Computer Science from the University of Central Lancashire (UCLan), where later in I completed my PhD in HCI focusing on methods for Dog-Computer Interaction.
I publish mostly in HCI, but also in animal behaviour and cognition, robotics and augmented reality systems in top conferences in CS such as CHI to CSCW.
Feel free to send me a message for a chat, check out my blog where I talk to researchers, blog about conferences I’ve been too and projects I take in my spare time, read my publications or look at my previous projects for more information on what I’ve worked on so far.
I always have several exciting Master’s thesis topics. To inquire topics, please email me with your CV, transcript, and letter of interest.
Email: ilyena.hirskyj-douglas [at] aalto. fi
Google Scholar: scholar.google.co.uk/citations?user=9z8DutMAAAAJ&hl=en&oi=ao | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573444.87/warc/CC-MAIN-20190919060532-20190919082532-00392.warc.gz | CC-MAIN-2019-39 | 1,509 | 8 |
https://flowingdata.com/2020/09/25/what-states-are-doing-to-make-mail-in-ballots-clearer/ | code | Mail-in ballots can be rejected if they’re not filled out or mailed correctly. A small percentage of them always are. This year, when we’re talking millions of mail-in ballots, even a small percentage means a lot of tossed ballots. For NYT’s The Upshot, Larry Buchanan and Alicia Parlapiano show how some states modified the design of their ballots to reduce the rejections.
What states are doing to make mail-in ballots clearer
Projects by FlowingData See All →
The Change My Son Brought, Seen Through Personal Data
I combed through personal data that I’ve actively and passively collected since early graduate school to see how life is different now with a 6-month old.
Basketball Stat Cherry Picking
Wow your friends during the game with random win percentages, based on various player stats.
Remote Workers vs. Non-Remote Workers
How the schedules between remote and non-remote workers differ during workdays. | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00366.warc.gz | CC-MAIN-2021-04 | 923 | 9 |
http://freebsdsoftware.org/graphics/togl.html | code | May 26, 2018
Tk OpenGL widget
Togl is a Tk widget for OpenGL rendering. Togl is based on OGLTK, originally written by Benjamin Bederson at the University of New Mexico who has since moved to the University of Maryland. Togl adds the new features - color-index mode support including color allocation functions - support for requesting stencil, accumulation, alpha buffers, etc - multiple OpenGL drawing widgets - OpenGL extension testing from Tcl - simple, portable font support - overlay plane support
Togl allows one to create and manage a special Tk/OpenGL widget with Tcl and render into it with a C program. That is, a typical Togl program will have Tcl code for managing the user interface and a C program for computations and OpenGL rendering. Togl is copyrighted by Brian Paul [email protected] and Benjamin Bederson [email protected]. See the LICENSE file for details. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00401.warc.gz | CC-MAIN-2022-27 | 890 | 4 |
https://security.stackexchange.com/questions/234001/openssl-recover-passphrase-with-encrypted-and-not-encrypted-file | code | This is known as a known-plaintext attack. From the Wikipedia article quoted above:
The known-plaintext attack (KPA) is an attack model for cryptanalysis
where the attacker has access to both the plaintext (called a crib),
and its encrypted version (ciphertext). These can be used to reveal
further secret information such as secret keys and code books.
Modern encryption algorithms such as AES are designed to be highly resistant to known-plaintext attacks, however earlier encryption algorithms were susceptible to known-plaintext attacks. For example, with XOR encryption, it is trivial to find the encryption key given the ciphertext and the plaintext. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233508977.50/warc/CC-MAIN-20230925115505-20230925145505-00654.warc.gz | CC-MAIN-2023-40 | 656 | 6 |
https://scds.github.io/dmds-22-23/WhatIsDS.html | code | Have you ever wanted to learn more about Digital Scholarship? This constantly evolving field includes data analysis and visualization, Digital Humanities, and beyond. Join the conversation by watching the recording of this panel and learn about the many possibilities for those working in Digital Scholarship.
Presentations include (in order of presenter):
- Andrea Zeffiro and Jay Brodeur (Co-Directors of the Sherman Centre) on the Centre’s work and the field of Digital Scholarship at large.
- Jess Rauchberg (PhD Candidate in Communication Studies and Media Arts and past SCDS Graduate Resident) on disabled creators, content moderation, and how disabled Instagram creators reconfigure platform use through memes.
- Vivek Jadon (Data Specialist and Co-Lead of the DASH Program) on his work bringing data analysis skills to McMaster through DASH and his current project, the Hamilton Open Data Portal, which collects and makes available data tables on topics including business, immigration, and the environment.
- Maddie Brockbank (PhD Candidate in Social Work and past SCDS Graduate Resident) on her web resource “Learning in Colour,” which consolidates information, resources, and guidance on creating safer classroom and campus spaces for BIPOC students.
- Danica Evering (Research Data Management Specialist) on her work bringing Research Data Management skills to the McMaster Community and the importance of proper RDM practices.
- Shaila Jamal (PhD Candidate in Geography and Earth Sciences, past SCDS Graduate Resident, and current DASH Support Assistant) on her work creating an OER with Dr. Antonio Paez for a course offered in the Earth, Environment, and Society program.
- Learning in Colour
- Research Data Management Services at McMaster
- DASH: The Data Analysis Support Hub | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102612.80/warc/CC-MAIN-20231210155147-20231210185147-00839.warc.gz | CC-MAIN-2023-50 | 1,799 | 11 |
https://community.qlik.com/t5/New-to-Qlik-Sense/Controlling-the-scale-of-a-visualization/td-p/1435395 | code | I have a bar chart with "Model" as the primary dimension along the x-axis, and "age class" as a secondary dimension. It appears that how many sets of "Models" (each with one or more age class bars) can be shown is fixed and not possible to control. Changing the size of the window does not make a difference. In my graph, I would like to have thinner bars in order to be able to have more "Models" shown at the same time (minimizing scrolling), but I cannot find a way to control this. It seems that there is a maximum of ~49 bars (including 1 "bar-size" empty space between each model). If the bars could be made thinner, showing at least 100 bars would not be a problem.
I also would like to have each model equally spaced; now, if the a certain model does not have a particular value in the "age class", there will be no empty space (as in an Excel graph) and the result will be un-equally spaced "model" data set, which makes graph hard to read. Is there way to change that? | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743007.0/warc/CC-MAIN-20181116091028-20181116113028-00450.warc.gz | CC-MAIN-2018-47 | 978 | 2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.