url | tag | text | file_path | dump | file_size_in_byte | line_count
stringlengths 13-4.35k | stringclasses 1 value | stringlengths 109-628k | stringlengths 109-155 | stringclasses 96 values | int64 112-630k | int64 1-3.76k
---|---|---|---|---|---|---|
https://windowsforum.com/threads/cannot-shutdown-windows-vista.11111/ | code | I purchased a new HP Pavilion a year ago which came with the Windows Vista OS. Until recently it's been fine. But lately it has not been shutting down using the normal method; I have to manually turn off the machine. How can I resolve this problem?
As Kemical says, give us as much detail as possible regarding what does or does not happen when you click Shutdown at the Start menu.
If it is not shutting down by normal means then some program or application is preventing it from happening. Have you installed any new software recently?
As Fatboy mentions, have you updated or changed any drivers? If you have, try rolling back to the previous drivers.
Another thing to try is System Restore. See if this sorts the problem; if so, try to remember what has changed since the last restore point.
Incidentally, have you installed SP1? | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864919.43/warc/CC-MAIN-20180623015758-20180623035758-00586.warc.gz | CC-MAIN-2018-26 | 816 | 6 |
http://s-p.mit.edu/publicity/anno_view.php?action=extended&Event_ID=b4100c55590e96ba2f5cfcb91e4d2107 | code | SP-CoSI Graduate Dinner Seminar
Thursday, Apr 27, 6:30-8pm, Sid Pac MP Room.
The General Audience Graduate Seminar provides MIT students a venue for more scholarly and open interactions with their peers.
Integrated Design and Management
> Olivia Fiebig
Department of Chemistry
> Alpha Yacob Arsano
Sustainable Design Lab
School of Architecture
Sponsors: ODGE Graduate Student Life Grants
For more info visit https://www.facebook.com/SPCoSISeminars/
> Amy Zhang (Human Computer Interaction) MIT CSAIL
> Claudia Pérez D'Arpino (Interactive Robotics) MIT CSAIL
> Nancy Aggarwal (Laser Interferometric Gravitational Wave Observatory- LIGO) MIT Physics
This seminar series aims to:
1) Help inspire interdisciplinary collaborations, as students learn about problems in other fields that could be attacked using their own expertise.
2) Provide a venue to spark new collaborations and interdisciplinary projects.
3) Increase graduate student exposure to fields outside of their primary area of research.
4) Give graduate students an opportunity to practice public speaking in a low-pressure environment of peers.
5) Help graduate students think about presenting their work to non-expert audiences.
6) Increase students' comfort levels with unfamiliar research topics.
Join us for the seminar and dinner at MIT's Sidney Pacific graduate residence! | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812913.37/warc/CC-MAIN-20180220070423-20180220090423-00239.warc.gz | CC-MAIN-2018-09 | 1,476 | 23 |
https://am.notsowise.net/2008/10/06/deleted-second-slice/ | code | I ordered another slice to play with on 4 September. I attempted again to install LEMP, which I did successfully. The LEMP server has been doing a good job, and a few days ago I also tried restoring a backup image to my first slice and swapped IPs between the slices; all my sites are now running off my original first slice.
So the second slice was idling, and I deleted it today. I have played with it and updated my LAMP howto; I couldn't find a use for it now.
I guess if I have spare time after dinner I had better spend more time on the black and white keyboard to ensure better sleep. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00353-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 584 | 3 |
https://issues.jenkins.io/browse/JENKINS-42880 | code | We upgraded from Jenkins 2.47 to 2.50, and since then nearly all of our Ant build steps are failing.
I found out why:
We have the following checks in our Ant scripts:
The problem is that we are using the property
And "env" is suddenly empty; nothing else changed. ANT_HOME, for example, is set, but the check is failing.
I didn't find any clue in the Jenkins changelog or here in JIRA, so I'm filing this issue. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100229.44/warc/CC-MAIN-20231130161920-20231130191920-00191.warc.gz | CC-MAIN-2023-50 | 379 | 6 |
http://workbook.craftingdigitalhistory.ca/ | code | Crafting Digital History, A Workbook by Shawn Graham and Rob Blades
For more advanced tutorials and help, please note the following:
- Shawn Graham, shawn dot graham at carleton dot ca, @electricarchaeo
CAUTION: Photosensitivity and eye strain
Some of the digital tools and tutorial videos used in this workbook can potentially cause eye strain or affect those with photosensitivity. Make sure to take breaks often, relax your eyes, and stretch.
Most exercises in this workbook are accompanied by a video tutorial. Please enable closed captions for more direction.
Original content by Shawn Graham is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.0/warc/CC-MAIN-20240421040323-20240421070323-00580.warc.gz | CC-MAIN-2024-18 | 687 | 8 |
https://www.springerprofessional.de/en/the-geographies-of-covid-19/23659952 | code | This volume of case studies focuses on the geographies of COVID-19 around the world. These geographies are located in both time and space, concentrating on both first- and second-order impacts of the COVID-19 pandemic. First-order impacts are those associated with the immediate response to the pandemic, including tracking numbers of deaths and cases, testing, access to hospitals, impacts on essential workers, searching for the origins of the virus, and preventive measures such as vaccines and contact tracing. Second-order impacts are the result of actions, practices, and policies in response to the spread of the virus, with longer-term effects on food security, access to health services, loss of livelihoods, evictions, and migration. Further, the COVID-19 pandemic will be prolonged by the onset of variants, and it sets the stage for similar future events. This volume provides a synopsis of how geography and geospatial approaches are used to understand this event and the emerging “new normal.” The volume's approach is necessarily selective, given the global reach of the pandemic and the broad sweep of second-order impacts, so important issues may be left out. However, the book is envisioned as the prelude to an extended conversation about adaptation to complex circumstances using geospatial tools.
Using case studies and examples of geospatial analyses, this volume adopts a geographic lens to highlight the differences and commonalities across space and time where fundamental inequities are exposed, the governmental response is varied, and outcomes remain uncertain. This moment of global collective experience starkly reveals how inequality is ubiquitous and vulnerable populations – those unable to access basic needs – are increasing. This place-based approach identifies how geospatial analyses and the resulting maps depict the pandemic as it ebbs and flows across the globe. Data-driven decision making is needed as we navigate the pandemic and determine ways to address such future events, enabling local and regional governments to prioritize limited resources and mitigate the long-term consequences of COVID-19. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00211.warc.gz | CC-MAIN-2023-23 | 2,162 | 2 |
https://sa-mebel-ekanom.ru/forum/?download=945 | code | Serial key newest 'ssh' Questions - Page 87 - Unix & Linux Stack Exchange
Most kernels are built with this already, but if you have customised your kernel you may not have built it. Solution: rebuild your kernel. "PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match", which I imagine is because the cookie is different for this session than the last one I copied to root's session. "Invalid MIT-MAGIC-COOKIE-1 key" and "cannot open display" errors (repeated), then "Press ENTER or type command to continue", at which point I would get thrown into regular console vim. "xhost local:" is not secure if your machine can have multiple users (and even if it doesn't currently, that's a bad habit). Invalid mit-magic-cookie-1 key centos.
- Debian, SpaceNavigator PE, 3DxWare V1.2.7, failed to get
- SSH X11 forwarding with sudo and missing magic cookies
- Solved - "Can't open display"
- Invalid MIT-MAGIC-COOKIE-1 keyInvalid MIT-MAGIC-COOKIE-1
- How To Install Ubuntu 20.04 LTS On VirtualBox [Windows
- FS#48088: x2goclient depends on xorg-xauth
- 136976 – execution using ssh fails with "BadWindow
- Upgraded to 7.6 - laptop keyboard no longer works | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00188.warc.gz | CC-MAIN-2021-21 | 1,201 | 10 |
http://docs.codehaus.org/exportword?pageId=229740541 | code | We have received a request on geotools-devel to relicense a subset of the codebase under the Apache license to facilitate collaboration with the Apache sis-dev community.
The code contribution agreement, cited below, clearly allows contributors to continue to reuse and repurpose their own work. As such this proposal is strictly focused on establishing a working relationship with the Apache foundation to facilitate making a subset of the codebase available under a dual license.
The GeoTools project makes use of several Apache projects, and is familiar with the requested Apache License.
To meet the above request:
Discussion on this topic has taken place on several geotools email lists:
And the next board meeting:
OSGeo discuss:
Related discussion about migrating code between projects and carefully across license changes:
Alternatives shortlisted in email discussion:
This proposal is up for review:
AFTER (using wording of current apache license):
This initial request is within the limits of the GeoTools code contribution agreement: GeotoolsAssignmentToOSGeo.pdf, as detailed in the following sections:
Section II. Assignment of Copyright
Section V. Obligations of the Foundation
The relevant section of the foundation bylaws mentioned above is:
Section VII. Assignment of Agreement | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709224828/warc/CC-MAIN-20130516130024-00067-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,750 | 17 |
https://daniellemmiller.com/tag/first-impression | code | You never get a second chance to make a first impression
In the digital space this is a crucial concept
I have seen so many beautifully and brilliantly designed websites, and then I see it
Something glaring…like a hair in your soup.
A word used incorrectly (I have a personal grudge against the misuse of “loose” and “lose”)
A major typo
A link that doesn’t work
Your brand experience has just bombed…and not in a glittery good way
None of us are perfect (because where is the fun in that anyway?), but it is imperative that you get outside eyeballs that you trust to fine-tooth-comb things for you. It doesn’t mean that you won’t get the client, but you have put an unnecessary obstacle in your path to do so.
You have poured your heart and soul into this representation of who you are and the work you do and one slip up can seriously damage it
Have it proofed again…and again…and again
Your brand reputation and brand experience depend on it | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817382.50/warc/CC-MAIN-20240419074959-20240419104959-00653.warc.gz | CC-MAIN-2024-18 | 966 | 12 |
http://crativelearningcenter.blogspot.com/2013/09/object-oriented-programming-its.html | code | Object-oriented programming treats data as a critical element in program development, and these elements are called objects. Object-oriented programming (OOP) is a programming model organized around objects. OOP allows decomposition of a problem into a number of entities called objects.
In an object-oriented programming language, an object refers to a specific type or “instance” of a class. Each object has a structure similar to the other objects in its class, but each object can be assigned individual characteristics. Objects can also call functions, or methods. Basically, objects are the run-time entities in object-oriented programming. An object may represent a person, a place, a student in a class, a bank account, a table of data, or any other item that the program is to use.
Object-oriented programming has become one of the programming buzzwords of today. There appears to be a great deal of interest and excitement among software engineers in using OOP.
Some features of object-oriented programming:
It emphasizes data rather than procedure.
In OOP, programs are divided into objects.
Data structures are designed such that they characterize the objects.
Functions that operate on the data of an object are tied together with that data.
Data is hidden and cannot be accessed by external functions.
Objects can communicate with each other through functions.
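As a brief illustration of these features (a sketch only; the BankAccount class and its fields are hypothetical examples, not from the original article), a Python class ties data to the functions that operate on it and hides that data from external code:

```python
class BankAccount:
    """An object ties its data (the balance) to the functions that operate on it."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance  # underscore prefix: hidden by convention from external code

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance


# Objects are run-time entities: each instance carries its own state,
# and other code communicates with it only through its methods.
account = BankAccount("Alice")
account.deposit(100)
account.withdraw(30)
print(account.balance())  # prints 70
```

Here external code never touches the balance directly; it communicates with the object only through its methods.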
The application areas of OOP include:
- Real-time systems,
- Object-oriented databases,
- Simulation and modeling,
- Artificial intelligence,
- Expert systems,
- Neural networks and parallel programming,
- Hypertext and hypermedia,
- Decision support and office automation systems,
- CIM (Computer Integrated Manufacturing) systems,
- CAM (Computer Aided Manufacturing) / CAD (Computer Aided Design) systems.
The most popular application of OOP has been in the area of user interface design, such as Windows. Most windowing systems have been developed using OOP technology.
Benefits of object-oriented programming:
- Through the inheritance feature of object-oriented programming, we can create an object for one program and easily reuse it in other programs.
- We can build programs from standard working modules that communicate with one another.
- The principle of data hiding helps the programmer build secure programs, since a particular part of the code can be hidden for security.
- It is easy to partition the work in a project based on objects.
- Large programs are very difficult to write, but object-oriented programming forces the programmer to go through an extensive planning phase, which makes for a better design flow.
- Object-oriented systems can easily be scaled from small to large systems, and their maintenance cost is much lower.
- Software complexity can be easily managed in an object-oriented system.
- Object-oriented programming leads to savings in development time and provides higher productivity. | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056892.13/warc/CC-MAIN-20210919160038-20210919190038-00153.warc.gz | CC-MAIN-2021-39 | 2,929 | 31 |
http://fitness.stackexchange.com/questions/tagged/exercise-equipment+exercise-technique | code | Correct position on an upright exercise bike?
Let's say I use something similar to the exercise bike shown below. When I used to do track, my friends told me while exercising on this machine, to keep a straight back (maybe even a little leaned ...
Oct 3 '12 at 1:39 | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00145-ip-10-146-231-18.ec2.internal.warc.gz | CC-MAIN-2014-23 | 2,214 | 53 |
http://sghill.blogspot.com/2010/08/23-hours.html | code | The trip itself was a rather uneventful 23 hours, but I felt the need to document it for both my own sake and the potential interest of future TWU participants. I lost track of how many time zones I flew through. It was sunny when I left Chicago and sunny 8 hours later in Frankfurt. We arrived in Bangalore at 12:20 in the morning.
Most of it is already a blur at this point, but a few things that still stand out:
- wondering if it was just coincidence that a flight attendant named "Stickler" maintained a perfect hairdo and uniform throughout the flight
- thinking the $300 "upgrade" to economy plus is a laughable overcharge when checking in
- thinking the $300 upgrade to economy plus is an incredibly good deal five hours in
- "So you're traveling to India for work? How long have you been with this company?"
"It's my second day"
- wishing Mercedes would win the bus contract for the Chicago Transit Authority
- admiring the smart and efficient use of the inclined plane in Frankfurt security
- the immigration officer not believing my picture or my signature was me, asking me to sign something in front of him...which I'd gladly do all day, because my signature is pretty awesome.
- Bangalore International Airport isn't much different from O'Hare. This shockingly huge ad appeared in both:
The real adventure begins on the way to the office from the airport. More on that here | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863043.35/warc/CC-MAIN-20180619134548-20180619154548-00433.warc.gz | CC-MAIN-2018-26 | 1,382 | 12 |
http://tengtehao.com/info/2127/50401.htm | code | Zheng Gang is a senior researcher (research director) and doctoral supervisor at Inria, the French national institute for research in computer science and automation, an IEEE Senior Member, a member of the French committee on modeling, analysis and decision-making for dynamical systems, and deputy director of the Sino-French joint laboratory at the University of Lille. He has served as vice-chair of the control committee of the French national academic council, vice-chair of the northern-France committee on automation and human-machine intelligent systems, and vice-chair of IFAC Technical Committee 9.2. His main research interests include the observation and control of complex dynamical systems and their applications in robotics. He has published one monograph with Springer and more than 160 papers in international conferences and journals, including about 20 in Automatica and IEEE TAC, and has participated in and led several French national research-funding projects.
Venue: Tencent Meeting live stream (ID: 368 227 364, link: https://meeting.tencent.com/dm/Vlyebpch4RFw)
Abstract: Recently soft robotics has rapidly become a novel and promising area of research with many designs and applications due to their flexible and compliant structures. However, it is more difficult to derive the nonlinear dynamic model of such soft robots. In this talk, two different modeling techniques will be presented. The first method is based on Cosserat rod theory, where a micro structure has been imposed on the continuum medium to facilitate the modeling of soft slender-type robots. By applying the Newton-Euler approach, the differential kinematics and dynamics of the soft manipulator can be formulated as a set of highly nonlinear partial differential equations via the classic Cosserat rod theory. Then a discrete modeling technique, named piecewise linear strain, is proposed to solve the deduced PDEs, based on which the associated analytic models are obtained. The second modeling approach is the so-called finite element method to model soft robots with arbitrary shapes, where the particles are only equipped with position vectors.
Venue: Tencent Meeting live stream (ID: 394 451 613, link: https://meeting.tencent.com/dm/bbqGLSdoY4dk)
Abstract: Nowadays, the process of designing soft robots is still governed by trial-and-error or bio-inspired notions. Given a particular soft robot's configuration, evaluating its reachable workspace is still an open subject, and is essential for soft robotics' other main scientific challenges, such as control, trajectory planning, and design optimization. For this topic, three different techniques will be presented in this talk. The first one is an optimization-based approach that consists of mapping the exterior boundaries of the workspace. This method can successfully reduce the complexity of the workspace estimation compared to the forward approach, but cannot provide interior knowledge of the workspace. The second approach is the interval analysis technique, which consists of exploring all feasible configurations in the workspace of soft robots. However, this method is relatively exhaustive since it explores all the feasible configurations in the workspace. To reduce the computational complexity, an alternative methodology to determine all boundaries of soft robots' workspaces, named the continuation approach, will be presented.
Venue: Tencent Meeting live stream (ID: 168 887 107, link: https://meeting.tencent.com/dm/V5Hjf5Ja8wfg)
Abstract: Unlike rigid robots, the elastic deformation of a soft robot results in infinite degrees-of-freedom motions. Therefore, the control theory developed for rigid robots is poorly applicable in this case. Moreover, the numerical models used for modeling deformation are not particularly well adapted for feedback control: too slow to compute and too difficult to analyze (because of the lack of an analytical form). In the literature, much research uses model-free approaches for control, where the PID controller is the simplest and most used one to control soft robots. However, such a controller can only achieve local and slow-motion tracking. Another method is driven by empirical data and applies machine learning techniques to design controllers. Such an approach mainly works for static models (thus it cannot realize fast motion control), and the learned model varies case by case (thus it cannot treat unpredictable external disturbances). In this talk, we present how to use precise deformation modeling methods (Cosserat and FEM) for control, since they allow us to take into account the physics (and even the multiphysics) of the robot and the interaction with the environment. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00113.warc.gz | CC-MAIN-2023-23 | 4,453 | 7 |
https://www.ruby-toolbox.com/projects/nbppl | code | Simple API client to fetch exchange rate to PLN for selected currency and date from http://api.nbp.pl
Add this line to your application's Gemfile:
And then execute:
Or install it yourself as:
$ gem install nbppl
# fetch mid rate for specific currency and date
Nbppl::Client.new.fetch_mid_rate("EUR", Date.parse("2021-06-01"))

# fetch mid rate from closest date in the past
Nbppl::Client.new.closest_mid_rate("USD", Date.parse("2021-06-06"))
# => [3.6931, #<Date: 2021-06-04 ((2459370j,0s,0n),+0s,2299161j)>]

# use caching by making multiple calls with same class instance
client = Nbppl::Client.new
client.fetch_mid_rate(a)
client.fetch_mid_rate(a) # no unnecessary api call

# or by using Nbppl::Client.current
Nbppl::Client.current.fetch_mid_rate(a)
Nbppl::Client.current.fetch_mid_rate(a) # no unnecessary api call
After checking out the repo, run bin/setup to install dependencies. Then, run rake test to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.
Bug reports and pull requests are welcome on GitHub at https://github.com/meceo/nbppl.
The gem is available as open source under the terms of the MIT License. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00120.warc.gz | CC-MAIN-2021-25 | 1,462 | 17 |
https://community.blynk.cc/t/solved-unstable-connection-with-the-cloud/6444 | code | For a month my heating-management project was running extremely stable on a an Arduino Mega + ethernet shield.
My Blynk app runs on One+ Android.
The history graphs and the commands worked without loss of data and quickly.
But since the 24th of May, all graphs and buttons are dead on my One+.
I tried to get connected with a refreshed “auth token”: no change.
I tried the simplest program (one button, one LED): this worked for a while, but not stably (alternately a variable number of seconds with and without access to the cloud).
I tried these simple tests with a new project under the same account, but also under a brand-new account: same bad result.
I tried external and USB power supply without any change.
I tried the USB connection (blynk-ser script) : this connection to the cloud was very stable !!!
I tried Blynk application
The laptop used for the USB tests was connected to the internet through the same router.
To exclude hardware failure, I double-checked all the (one LED / one button) tests with two different Arduino boards and two different ethernet shields. The results were identically bad. I paid attention to use different auth codes for each ethernet shield.
Internet quality was recently upgraded to 200/20 Mbps (relevant?) | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358074.14/warc/CC-MAIN-20211126224056-20211127014056-00624.warc.gz | CC-MAIN-2021-49 | 1,240 | 13 |
https://www.skookum-films.com/epidemic-sound-discount-code/ | code | Epidemic Sound Discount Code
EPIDEMIC SOUND DISCOUNT CODE
LOOKING FOR AN EPIDEMIC SOUND DISCOUNT CODE?
GRAB THIS LIMITED TIME OFFER BEFORE IT'S GONE
SAVE UP TO $1100 WITH OUR EPIDEMIC SOUND DISCOUNT CODE TODAY
FREE TRIAL + 80% OFF ON COMMERCIAL SUBSCRIPTION
FREE TRIAL + 20% OFF ON PERSONAL SUBSCRIPTION
SIMPLY FOLLOW THE LINK BELOW
THE DISCOUNT CODE WILL BE AUTOMATICALLY ADDED AT CHECKOUT!
SAVE $1100 ON EPIDEMIC SOUND TODAY!
GET THE ONE MONTH EPIDEMIC SOUND FREE TRIAL + 80% DISCOUNT AND SAVE $1100 ON AN EPIDEMIC SOUND COMMERCIAL ANNUAL SUBSCRIPTION
GET THE ONE MONTH EPIDEMIC SOUND FREE TRIAL + 20% DISCOUNT AND SAVE $425 ON AN EPIDEMIC SOUND PERSONAL ANNUAL SUBSCRIPTION
Two Free Months
GET A STORYBLOCKS FREE TRIAL
Would you like to try Storyblocks before signing up? Thanks to our Storyblocks Promo Code (AKA Videoblocks Promo Code) you can do so by CLICKING HERE! You'll get a 7-day Storyblocks free trial!
We're a music company soundtracking the new generation of storytellers. We own and manage the world’s largest library of royalty free music. Our cloud-based service contains over 30,000 tracks covering an extensive range of genres, all of which are created with the sole purpose of enhancing audiovisual productions. All tracks are immediately available for downloading, sharing and use on our website epidemicsound.com
We're based in Stockholm, Sweden, and have offices in New York, LA, Hamburg, Amsterdam, Madrid and Sydney. We're working closely with a few hundred talented composers and producers from various nations, and we have clients and collaborators all over the world.
Explore and experiment with music for your videos using our stems; choose between the full track, melody, drums, bass or instrumentals.
UNLIMITED ACCESS TO 30.000 TRACKS
Diverse music, carefully created by composers, producers, instrumentalists and artists regularly featured on major streaming platforms. Unlimited downloads. Unlimited uploads, on all platforms.
HOW DOES IT WORK? | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00467.warc.gz | CC-MAIN-2020-05 | 2,733 | 23 |
https://replit.com/@StinkyWinkleton?tab=community | code | I did a stinky
Business Simulator *BETA*, made with Python
A basic idle game in the console
STORE HIRING FIRING STATS WORK HOME. In Development. Recent comments (4)
Added the COMMANDS command to bring up the list of commands
I made it so you don't have to type STORE again. Now if you're in the STORE you need to type EXIT to exit the store. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00027.warc.gz | CC-MAIN-2023-40 | 336 | 6 |
http://harveymckinnon.com/services/needs | code | Here are some things to consider if your organization is thinking about starting a direct mail program.
At Harvey McKinnon Associates, we find that direct mail is most effective for nonprofit organizations that:
- Have a clear and compelling mission
- Can use emotion to tell their story
- Have an active member base of 8,000 (or the potential to reach this level)
- Have an annual budget of $100,000 to invest in direct mail (much less to test) to see if there is an audience for your organization.
- Have other income sources such as grants and special events
- Believe in the potential of direct mail
- Know that direct mail is a long-term investment that produces significant income, over time. | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828313.74/warc/CC-MAIN-20160723071028-00122-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 698 | 9 |
https://www.analyticsvidhya.com/blog/2021/09/hypothesis-testing-in-machine-learning-everything-you-need-to-know/?utm_source=reading_list&utm_medium=https://www.analyticsvidhya.com/blog/2021/06/linear-predictive-models-part-1/ | code | Everything you need to know about Hypothesis Testing in Machine Learning
This article was published as a part of the Data Science Blogathon
What is Hypothesis Testing?
Any data science project starts with exploring the data. When we perform an analysis on a sample through exploratory data analysis and inferential statistics, we get information about the sample. Now, we want to use this information to predict values for the entire population.
Hypothesis testing is done to confirm our observation about the population using sample data, within the desired error level. Through hypothesis testing, we can determine whether we have enough statistical evidence to conclude if the hypothesis about the population is true or not.
How to perform hypothesis testing in machine learning?
To trust your model and make predictions, we utilize hypothesis testing. When we use sample data to train our model, we make assumptions about the population. By performing hypothesis testing, we validate these assumptions for a desired significance level.
Let’s take the case of regression models: When we fit a straight line through a linear regression model, we get the slope and intercept for the line. Hypothesis testing is used to confirm if our beta coefficients are significant in a linear regression model. Every time we run the linear regression model, we test if the line is significant or not by checking if the coefficient is significant. I have shared details on how you can check these values in python, towards the end of this blog.
Key steps to perform hypothesis test are as follows:
- Formulate a Hypothesis
- Determine the significance level
- Determine the type of test
- Calculate the Test Statistic values and the p values
- Make Decision
Now let’s look into the steps in detail:
Formulating the hypothesis
One of the key steps to do this is to formulate the below two hypotheses:
The null hypothesis represented as H₀ is the initial claim that is based on the prevailing belief about the population.
The alternate hypothesis represented as H₁ is the challenge to the null hypothesis. It is the claim which we would like to prove as True
One of the main points we should consider while formulating the null and alternative hypotheses is that the null hypothesis always looks at confirming the existing notion. Hence, it carries the signs =, ≥ or ≤, while the alternate hypothesis carries the signs ≠, > or <.
Determine the significance level also known as alpha or α for Hypothesis Testing
The significance level is the probability of the test statistic falling in the critical region when the null hypothesis is true. It is usually set at 5% or 0.05, which means that there is a 5% chance that we would accept the alternate hypothesis even when our null hypothesis is true.
Based on the criticality of the requirement, we can choose a lower significance level of 1% as well.
Determine the Test Statistic and calculate its value for Hypothesis Testing
Hypothesis testing uses a test statistic, which is a numerical summary of a dataset that reduces the data to one value used to perform the hypothesis test.
Select the type of Hypothesis test
We choose the type of test statistic based on the predictor variable – quantitative or categorical. Below are a few of the commonly used test statistics for quantitative data
| Type of predictor variable | Distribution type | Desired test |
| --- | --- | --- |
| Quantitative | Normal distribution | Z-test |
| Quantitative | Positively skewed distribution | F-test |
| Quantitative | Negatively skewed distribution | NA |
Z-statistic – Z Test
Z-statistic is used when the sample follows a normal distribution. It is calculated based on the population parameters like mean and standard deviation.
One sample Z test is used when we want to compare a sample mean with a population mean
Two sample Z test is used when we want to compare the mean of two samples
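To make the mechanics concrete, here is a minimal sketch of a two-sided one-sample z-test in pure Python. The sample values, the hypothesized mean of 50, and the known population standard deviation of 5 are invented numbers for illustration:

```python
import math

def one_sample_ztest(sample, pop_mean, pop_sd):
    """Two-sided one-sample z-test with a known population standard deviation."""
    n = len(sample)
    sample_mean = sum(sample) / n
    # Standard error of the mean under H0
    se = pop_sd / math.sqrt(n)
    z = (sample_mean - pop_mean) / se
    # Two-sided p-value from the standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    p_value = 2.0 * (1.0 - phi)
    return z, p_value

# Hypothetical data: H0 says the population mean is 50 (known sigma = 5)
sample = [52.1, 48.3, 55.0, 51.2, 49.8, 53.4, 54.1, 50.9, 52.7, 51.5]
z, p = one_sample_ztest(sample, pop_mean=50.0, pop_sd=5.0)
alpha = 0.05
decision = "reject H0" if p < alpha else "fail to reject H0"
print(round(z, 3), round(p, 3), decision)
```

With this particular sample the p-value comes out well above 0.05, so we fail to reject the null hypothesis.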
T-statistic – T-Test
T-statistic is used when the sample follows a t distribution and the population parameters are unknown. The t distribution is similar to the normal distribution, but it has a lower peak and fatter tails.
F-statistic – F test
For samples involving three or more groups, we prefer the F-test. Performing pairwise t-tests on multiple groups increases the chance of a Type 1 error; ANOVA is used in such cases.
Analysis of variance (ANOVA) can determine whether the means of three or more groups are different. ANOVA uses F-tests to statistically test the equality of means.
F-statistic is used when the data is positively skewed and follows an F distribution. F distributions are always positive and skewed right.
F = Variation between the sample means/variation within the samples
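The F ratio above can be computed by hand for a one-way ANOVA. The three groups below are invented numbers for illustration:

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]
f_stat = one_way_anova_f(groups)
print(round(f_stat, 2))
```

A large F value, as here, indicates that the variation between the group means is large relative to the variation within the groups.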
For negatively skewed data we would need to perform feature transformation
For categorical variables, we would be performing a chi-Square test.
Following are the two types of chi-squared tests:
- Chi-squared test of independence – We use the Chi-Square test to determine whether or not there is a significant relationship between two categorical variables.
- Chi-squared Goodness of fit helps us determine if the sample data correctly represents the population.
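As a sketch of how the chi-squared test of independence works, here is the statistic computed by hand for a hypothetical 2x2 contingency table (the counts are invented for illustration):

```python
def chi_square_statistic(table):
    """Chi-squared statistic for a contingency table: sum of (O - E)^2 / E."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence assumption
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = two categories of one variable, columns = another
table = [[30, 10],
         [20, 40]]
chi2 = chi_square_statistic(table)
# Degrees of freedom = (rows - 1) * (cols - 1)
dof = (len(table) - 1) * (len(table[0]) - 1)
print(round(chi2, 2), dof)
```

The statistic is then compared against the chi-squared distribution with the given degrees of freedom to obtain a p-value.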
The decision about your model
Test Statistic is then used to calculate the P-value. A P-value measures the strength of evidence against the null hypothesis. If the P-value is less than the significance level, we reject the null hypothesis.
if the p-value < α, then we have statistically significant evidence against the null hypothesis, so we reject the null hypothesis and accept the alternate hypothesis
if the p-value > α then we do not have statistically significant evidence against the null hypothesis, so we fail to reject the null hypothesis.
As we make decisions, it is important to understand the errors that can happen while testing.
Errors while making decisions
There are two possible types of error we could commit while performing hypothesis testing.
1) Type 1 Error – This occurs when the null hypothesis is true but we reject it. The probability of a Type I error is denoted by alpha (α). The Type 1 error rate is also known as the level of significance of the hypothesis test.
2) Type 2 Error – This occurs when the null hypothesis is false but we fail to reject it. The probability of type II error is denoted by beta (β)
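The interpretation of α as the Type 1 error rate can be checked with a small simulation: if the null hypothesis is true and we test at α = 0.05, we should wrongly reject about 5% of the time. The sample size, trial count, and seed below are arbitrary choices:

```python
import random
import math

random.seed(42)
critical_z = 1.96            # two-sided critical value for alpha = 0.05
n, trials = 30, 2000
false_rejections = 0
for _ in range(trials):
    # Draw a sample from N(0, 1), so H0 (mean = 0) is true by construction
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(sample) / n) / (1.0 / math.sqrt(n))   # z-test with known sigma = 1
    if abs(z) > critical_z:
        false_rejections += 1                      # a Type 1 error
type1_rate = false_rejections / trials
print(round(type1_rate, 3))   # should come out close to 0.05
```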
Hypothesis testing in python
The statsmodels library has the unique ability to perform and summarize the outcomes of hypothesis tests on your model. Based on your feature variables, you can determine which test value is relevant for your model and make decisions accordingly.
import statsmodels.api as sm
To create a fitted model, I have used Ordinary least squares
lr = sm.OLS(y_train, X_train_lm).fit()
Once we have trained the model, we can see the summary of the tests using the command lr.summary().
The summary table reports, among other things, the coefficient estimates, their standard errors and t-statistics, the p-values (the P>|t| column), and the F-statistic for the overall regression.
From a hypothesis testing standpoint, you need to pay attention to the following values to decide if you need to refine your model:
- Prob (F-statistic) – F-statistic tells us the goodness of fit of regression. You want the probability of F-statistic to be as low as possible to reject the null hypothesis.
- P-value is given in the column P>|t| – As mentioned above, for a good model, we want this value to be less than the significance level.
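The t-test behind the P>|t| column can also be reproduced by hand for simple (one-predictor) regression. This pure-Python sketch, with made-up data, computes the slope, its standard error, and the t-statistic that the summary table would report:

```python
import math

def slope_t_statistic(x, y):
    """Fit y = a + b*x by least squares; return (b, t-statistic for H0: b = 0)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx                         # slope estimate
    a = mean_y - b * mean_x               # intercept estimate
    # Residual variance with n - 2 degrees of freedom
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in residuals) / (n - 2)
    se_b = math.sqrt(s2 / sxx)            # standard error of the slope
    return b, b / se_b

# Hypothetical data with a clear linear trend plus a little noise
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
b, t = slope_t_statistic(x, y)
print(round(b, 3), round(t, 1))
```

Here the t-statistic is very large, so the p-value for the slope would be far below any usual significance level and we would reject the hypothesis that the coefficient is zero.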
This is all about hypothesis testing in this article.
Image source: All images in this blog have been created by the author
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00001.warc.gz | CC-MAIN-2023-50 | 7,442 | 69 |
https://forums.theregister.com/post/reply/1419155 | code | Zero-pressure v. superpressure balloon.
After a post in one of the recent article forums, I've been doing some digging about home-built zero-pressure or superpressure balloons.
I can understand the reasoning for choosing a weather balloon. They are cheap (relatively anyway), easy to obtain, no fuss in getting them back down, lots of previous knowledge to go by.
But from what I can find, for rockoon flights the big boys (NASA and the likes) seem to be using zero-pressure or polyethylene-film valved super-pressure envelopes. The advantages of using a zero-pressure design are big. The balloon soars to a maximum altitude and stays there until commanded to drop, thus giving you a much larger launch window and a lot less fuss about how to time the launch. (Just use a timer long enough that you can be sure the balloon has reached (near) maximum altitude.) There seem to be quite a few HAM radio enthusiasts using homebuilt zero-pressure balloons for high-altitude flights.
For instance there is this "tutorial": http://diydrones.com/profiles/blogs/team-prometheus-how-to-make-a-zero-pressure-high-altitude-balloon (look in the comments, the author posted the tutorial amongst the comment thread, not the most readable, but it seems a good guide)
The University of Cambridge seems to have dabbled in high altitude ballooning as well, might be worth it to give them a ring. (http://youtu.be/uK80MXHQ5hA)
A variation is the valved super pressure balloon. This design maintains a slightly higher inside pressure, but limits the pressure differential through a spring loaded valve to just below burst pressure. This has the advantage of a slightly more rigid envelope, keeping its shape better in gusts of wind. | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181179.12/warc/CC-MAIN-20201125041943-20201125071943-00550.warc.gz | CC-MAIN-2020-50 | 1,706 | 7 |
https://irl.metafilter.com/tags/pets | code | Let's have a zoom (or whatever platform) to show off our critters and just generally hang out! All with pets are welcome, all without pets who just want to see pets are welcome! The only agenda here is LET ME SEE YOUR PETS. Time/date: Sunday, February 21 at 2pm Central time. Platform still TBD, let me know your prefs in thread! | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00006.warc.gz | CC-MAIN-2022-21 | 329 | 1 |
https://pypi.org/project/jupyterlab-requirements/ | code | JupyterLab Extension for dependency management and optimization
Dependency management and optimization in JupyterLab.
This extension provides management of dependencies for JupyterLab notebooks.
The main goals of the project are the following:
- manage notebook requirements without leaving the notebook
- provide a unique and optimized* environment for each notebook
NOTE: The requirements are optimized using the Thoth resolution engine
- JupyterLab >= 3.0
You can install this extension with pip:
pip install jupyterlab-requirements
And start using it immediately on JupyterLab:
If you are seeing the frontend extension, but it is not working, check that the server extension is enabled:
jupyter server extension list
If the server extension is installed and enabled, but you are not seeing the frontend extension, check the frontend extension is installed:
jupyter labextension list
The jupyterlab-requirements extension for JupyterLab can be easily used from the notebook.
This jupyterlab extension provides a button directly in the notebook to manage the dependencies (see image below).
Start adding dependencies from empty notebook
Clicking the above button you will receive the following dialog form initially:
Initially, no dependencies are identified if you start a new notebook, as the related metadata does not exist yet. The extension checks the notebook metadata in order to identify dependencies every time you restart a notebook. Moreover, it verifies that the kernel you are using matches your dependencies; if not, it warns you to use the install button again to avoid weird behaviour.
You can start adding your packages using the central add button; once you select the package name and version, remember to add your package using the add button under the Action column, otherwise it won't be saved (in the future this behaviour will not be necessary thanks to the autocompletion feature):
NOTE: The extra button in action will be removed in the future.
NOTE: Autocompletion is planned for the future so that users can check which versions are available on PyPI.
Save dependencies added and install them in your customized kernel
After saving, the install button will appear so you can check the dependencies before actually installing them:
NOTE: You can choose the name of the kernel you want for your notebook.
Using the Thoth resolution engine, you can request an optimized software stack that satisfies your requirements from the Thoth recommender system. You can choose the type of recommendation that best fits your needs:
You can find more information and updates here.
Finally after using the install button:
Now all dependencies will be locked (direct and transitive), saved in the notebook metadata, and installed. Moreover, the kernel will be automatically created and set for your notebook without human intervention required.
Now you are ready to work on your project!
If you restart the notebook and check dependencies with the button, you will see that they are all installed and ready:
Start notebook without information about dependencies in metadata
If you have notebooks with code and you want to start using this extension, there is a nice feature that can be interesting.
Thoth relies on a library called invectio. This library statically analyzes sources and extracts information about called or exported library functions in Python applications.
jupyterlab-requirements extension uses this information to provide users with list of packages to be installed if they have never used the extension before.
Currently, Thoth is used by default and pipenv is the backup. In the future, users will be able to select a specific resolution engine.
Virtual environment for your dependencies
The virtualenv created to run your notebook according to its dependency requirements is created in:
Once the lock file is created using any of the available resolution engines, the dependencies are installed in the virtualenv using micropipenv.
The dependencies stored in the notebook metadata are also stored in the overlays folder (created automatically), using the kernel name by default.
If you want to know more about the use of overlays, have a look here.
Thoth configuration file
Thoth resolution engine is able to provide an optimized software stack based on the runtime environment you are using (more inputs are used; if you want to know more, have a look here).
In general, different runtime environments will have different effects on your application (e.g. more performance); therefore, we include this information in the notebook metadata so that others can find out what runtime environment has been used to run a certain notebook.
Note: You will need NodeJS to build the extension package.
The jlpm command is JupyterLab's pinned version of yarn that is installed with JupyterLab. You may use npm in lieu of jlpm below.
# Clone the repo to your local environment
# Change directory to the jupyterlab-requirements directory
# Install package in development mode
pip install -ve .
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
jupyter serverextension enable --py jupyterlab-requirements --sys-prefix
# Rebuild extension Typescript source after making changes
jlpm run build
You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.
# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm run watch
# Run JupyterLab in another terminal
jupyter lab
With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).
By default, the jlpm run build command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
jupyter lab build --minimize=False
pip uninstall jupyterlab-requirements
Demo development status and new features
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
| Filename, size | File type | Python version |
| --- | --- | --- |
| jupyterlab_requirements-0.6.4-py3-none-any.whl (328.8 kB) | Wheel | py3 |
| jupyterlab_requirements-0.6.4.tar.gz (96.6 kB) | Source | None |
Hashes for jupyterlab_requirements-0.6.4.tar.gz | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038084601.32/warc/CC-MAIN-20210415065312-20210415095312-00463.warc.gz | CC-MAIN-2021-17 | 6,703 | 66 |
http://createdbymissie.blogspot.com/2011/06/thank-you-for-cool-dude.html | code | A few weeks ago my family and I spent some time on the east coast. While we were gone we asked a friend's little boy to keep an eye on our turtles. For those of you who don't know we have three red eared slider turtles that we have had for about a year now. As my youngest son has some pretty serious alergies they are the only pets we are going to have (at least for now). Anyway, when we got home I whipped up this simple thank you card the boys who watched the turtles. | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00416.warc.gz | CC-MAIN-2018-30 | 472 | 1 |
https://zh.ifixit.com/Answers/View/96573/Can+not+get+the+white+screen+to+go | code | Can not get the white screen to go
i really hope you can help.
i have an ipod touch 4th gen, i have changed the screen and it was working for a short time after. like a few minutes. while i had it apart.
but i switched the ipod off put it all back together and i am now getting the white screen.
the sound is still working and so are all of the buttons, i can hear the lock and unlock sounds and i can turn on voiceover with a triple click and the volume on that will go up and down but i just cant get rid of the white screen.
i have tried all of the fixes i can find on here and on google but nothing seems to be working.
i have tried the screen on another ipod 4th gen and it works on that without error. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00051.warc.gz | CC-MAIN-2021-43 | 707 | 7 |
http://www.alexchinco.com/phase-change-in-high-dimensional-inference/ | code | In my paper Feature Selection Risk (2014), I study a problem where assets have different attributes and traders try to identify which of these attributes matter via price changes:
with each asset’s exposure to a given attribute given by and the noise is given by . In the limit as , , , and there exists both a signal opacity bound, , as well as a signal recovery bound, :
with in units of transactions. I explain what I mean by “” in Section 4 below. These thresholds separate the regions where traders are arbitrarily bad at identifying the shocked attributes (i.e., ) from the regions where traders can almost surely identify the shocked attributes (i.e., ). i.e., if traders have seen fewer than transactions, then they have no idea which shocks took place; whereas, if traders have seen more than transactions, then they can pinpoint exactly which shocks took place.
In this post, I show that the signal opacity and recovery bounds become arbitrarily close in a large market. The analysis in this post primarily builds on work done in Donoho and Tanner (2009) and Wainwright (2009).
2. Motivating Example
This sort of inference problem pops up all the time in financial settings. Suppose you moved away from Chicago a year ago, and now you’re moving back and looking for a house. When studying a list of recent sales prices, you find yourself a bit surprised. People seem to have changed their preferences for of different amenities: a car garage, a third bedroom, a half-circle driveway, granite countertops, energy efficient appliances, central A/C, or a walk-in closet? The mystery amenity is raising the sale price of some houses by dollars. How many sales do you need to see in order to figure out which of the amenities realized the shock?
The answer is . How did I arrive at this number? Suppose you found one house with amenities , a second house with amenities , and a third house with amenities . The combination of the price changes for these houses reveals exactly which amenity has been shocked. i.e., if only the first house’s price was too high, , then Chicagoans must have changed their preferences for car garages:
By contrast, if , then people must value walk-in closets more than they did a year ago.
Here’s the key point. The problem changes character at observations. sales is just enough information to answer yes or no questions and rule out the possibility of no change: . sales simply narrows your error bars around the exact value of . sales only allows you to distinguish between subsets of amenities. e.g., seeing just the first and second houses with unexpectedly high prices only tells you that people like either half-circle driveways or walk-in closets more… not which one.
Yet, the dimensionality in this toy example can be confusing. There is obviously something different about the problem at observations, but there is still some information contained in the first observations. e.g., even though you can’t tell exactly which attribute realized a shock, you can narrow down the list of possibilities to attributes out of . If you just flipped a coin and guessed after seeing transactions, you would have an error rate of . This is no longer true in higher dimensions. i.e., even in the absence of any noise, seeing any fraction of the required observations for will leave you with an error rate that is within a tiny neighborhood of as the number of attributes gets large.
3. Non-Random Analysis
I start by exploring how the required number of observations, , moves around as I increase the number of attributes in the setting where there is only shock and the data matrix is non-random. Specifically, I look at the case where and . My goal is to build some intuition about what I should expect in the more complicated setting where the data is a random matrix. Here, in this simple setting, the ideal data matrix would be -dimensional and look like:
where each column of the data matrix corresponds to a number in binary.
Let be a function that eats observed price changes and spits out the set of possible preference changes that might explain the observed price changes. e.g., if traders only see the st transaction, then they can only place the shock in of sets containing attributes each:
The nd transaction then allows traders to split each of these larger sets into smaller ones and place the shock in a set of possibilities:
With the rd transaction, traders can tell that the actual shock is either of possibilities:
The th observation then closes the case against the offending attribute.
Here’s the key observation. Only the absolute difference between matters when computing the size of the output of . If traders have seen transaction, then they can tell which subset of attributes has realized a shock. If traders have seen transactions, then they can tell which subset of attributes has realized a shock. If traders have seen observations, then they can tell which subset of attributes has realized a shock. Thus, after seeing any number of observations , traders can place the shock in a set of size . i.e., a trader has the same amount of information about which attribute has realized a shock in (i) a situation where and he’s seen transactions as in (ii) a situation where and he’s seen transactions.
The probability that traders select the correct attribute after seeing only observations is given by assuming uniform priors. Natural numbers are hard to work with analytically, so let’s suppose that traders observe some fraction of the required number of observations . i.e., for some traders see observations. We can then perform a change of variables and answer the question: “How much does getting additional observation improve traders’ error rate?”
I plot this statistic for ranging from to below. When , a trader’s predictive power doesn’t start to improve until he sees transactions (i.e., of ); by contrast, when a trader’s predictive power doesn’t start to improve until he’s seen transactions (i.e., of ). Here’s the punchline. As I scale up the original toy example from attributes to million attributes, traders effectively get useful information about which attributes realized a shock until they come within a hair’s breadth of the signal recovery bound . The opacity and recovery bounds are right on top of one another.
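The binary design from this section can be sketched in a few lines of Python. With p = 8 attributes whose columns encode the attribute index in binary, n = 3 noiseless price changes pinpoint a single shocked attribute. The matrix size, shock location, and shock size here are illustrative choices, not the post's exact numbers:

```python
# p attributes, n = log2(p) observations; column j of X is j written in binary
p, n = 8, 3
X = [[(j >> bit) & 1 for j in range(p)] for bit in range(n)]  # n x p binary matrix

def identify_shock(price_changes, shock_size):
    """Read the shocked attribute's index off the binary pattern of price changes."""
    index = 0
    for bit, q in enumerate(price_changes):
        if q == shock_size:          # this observation "contains" the shocked attribute
            index |= 1 << bit
    return index

# Simulate a shock of size 7 to attribute 5 (binary 101) and observe X @ beta
beta = [0] * p
beta[5] = 7
observed = [sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
print(observed, identify_shock(observed, shock_size=7))
```

Each observation answers one yes/no question about the shocked index's binary expansion, which is why the problem changes character at exactly log2(p) observations.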
4. Introducing Randomness
Previously, the matrix of attributes was strategically chosen so that the set of observations that traders see would be as informative as possible. Now, I want to relax this assumption and allow the data matrix to be random with elements :
recovers the true when it is -sparse. i.e., when has only non-zero entries . Since the linear system is underdetermined; however, if the level of sparsity is sufficiently high (i.e., is sufficiently small), then there will be a unique solution with high probability.
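For intuition about support recovery from a random Gaussian design, here is a brute-force sketch for the 1-sparse case: regress the observed price changes on each column of the attribute matrix and keep the best-fitting column. This is not the ℓ1 linear program in Equation (10), just a toy stand-in; the dimensions, seed, signal size, and noise level are arbitrary choices:

```python
import random

random.seed(7)
p, n = 50, 12                      # attributes, observations
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
true_j, true_beta = 17, 5.0        # one shocked attribute (1-sparse signal)
y = [X[i][true_j] * true_beta + random.gauss(0.0, 0.1) for i in range(n)]

def best_column(X, y):
    """Return the column whose one-variable least-squares fit leaves the smallest residual."""
    best, best_rss = None, float("inf")
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        denom = sum(c * c for c in col)
        beta_hat = sum(c * yi for c, yi in zip(col, y)) / denom
        rss = sum((yi - beta_hat * c) ** 2 for c, yi in zip(col, y))
        if rss < best_rss:
            best, best_rss = j, rss
    return best

print(best_column(X, y))   # should recover the shocked attribute's index, 17
```

With more than one shock, this column-by-column search breaks down combinatorially, which is exactly why the convex ℓ1 relaxation is used instead.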
First, I study the case where there is no noise (i.e., where ), and I ask: “What is the minimum number of observations needed to identify the true with probability for using the linear program in Equation (10)?” I remove the noise to make the inference problem as easy as possible for traders. Thus, the proposition below which characterizes this minimum number of observations gives a lower bound. I refer to this number of observations as the signal opacity bound and write it as . The proposition shows that, whenever traders have seen observations, I can make traders’ error rate arbitrarily bad (i.e., ) by increasing the number of attributes (i.e., ).
Next, I turn to the case where there is noise (i.e., where ), and I ask: “How many observations do traders need to see in order to identify the true with probability in an asymptotically large market using the linear program in Equation (10)?” Define traders’ error rate after seeing observations as:
denotes the probability that the linear program in Equation (10) chooses the wrong subset of attributes (i.e., makes an error) given the true support and averaging over not only the measurement noise, , but also the choice of the Gaussian attribute exposure matrix, . Traders’ error rate is the weighted average of these probabilities over every shock set of size . Traders identify the true with probability in an asymptotically large market if:
Thus, the proposition below which characterizes this number of observations gives an upper bound of sorts. I refer to this number of observations as the signal recovery bound and write it as . i.e., the proposition shows that, whenever traders have seen observations, they will be able to recovery almost surely no matter how large I make the market.
Proposition (Wainwright, 2009): Suppose , , , and , then traders can identify the true with probability in an asymptotically large market if for some constant :
The only cognitive constraint that traders face is that their selection rule must be computationally tractable. Under minimal assumptions a convex optimization program is computationally tractable in the sense that the computational effort required to solve the problem to a given accuracy grows moderately with the dimensions of the problem. Natarajan (1995) explicitly shows that constrained linear programming is NP-hard. This cognitive constraint is really weak in the sense that any selection rule that you might look up in an econometrics or statistics textbook (e.g., forward stepwise regression or LASSO) is going to be computationally tractable. After all, they have to be executed on computers.
What is really interesting is that the signal opacity bound, , and the signal recovery bound, , basically sit right on top of one another when the market gets large just as you would expect from the analysis in Section 3. The figure above plots each bound on a log-log scale for varying levels of sparsity. It’s clear from the figure that the bounds are quite close. The figure below plots the relative gap between these bounds:
i.e., it plots how big the gap is relative to the size of the signal recover bound . For each level of sparsity, the gap is shrinking as I add more and more attributes. This is an identical result as in the figure from Section 3: as the size of the market increases, traders learn next to nothing from each successive observation until they get within an inch of the signal recovery bound. The only difference here is that now there are an arbitrary number of shocks and the data matrix is random. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00512.warc.gz | CC-MAIN-2019-26 | 10,438 | 31 |
https://community.babycenter.com/post/a67364158/graco-4ever-car-seat?commentBy=TmXDd3nGd8tOSAEo | code | Original poster's comments (1)
We just installed the car seat, my baby is 9.5 months old, she looks comfortable in the seat, but when she falls asleep her head falls forward. It does not look like a safe sleeping position at all. I took the insert around her head out and it didn't make any difference. I tried to recline it back to a number "2" but then on the indicator the bubble went out of the blue line. I don't understand why the recline option is there? We have our other seat that is not installed yet sitting on our living room floor, and just for the heck of it I tried reclining it to "2" and the bubble is out of the blue line. Can someone help me here? It seems like I need to recline the seat, but apparently it's not safe? I must be missing something. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00441.warc.gz | CC-MAIN-2022-33 | 767 | 2 |
https://flashgames.bambousoft.com/games/zombie-cockroach.html | code | Continue from Smack-A-Lot : Zombie, this time a cockroach has been infected by the Zombie virus after contracting it from a fallen Zombie. Smack it! With the new blocking and annoying score penalty system, this time it's not going to be easy :D
Use the keyboard keys 'A' and 'D' or the touchscreen to smack the zombie cockroach. 'S' fends off the zombie cockroach's attacks. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00042.warc.gz | CC-MAIN-2023-50 | 368 | 2
http://www.perlmonks.org/?node_id=212208 | code | I'd say that it should be a warning, or a do-nothing. The + modifies the previous assertion. In this case, the previous assertion is (?#...), which asserts nothing about the stream. Asserting nothing a bunch of times should have the same effect as not asserting nothing at all, or asserting nothing once -- no effect. Of course, asserting nothing more than once probably isn't what you meant, but there's no way of telling what you did mean, so we should warn.
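For comparison across engines: Python's re module takes the hard-error route here rather than warning. A quantifier straight after a (?#...) comment group has nothing to repeat and compilation fails, while a comment sitting between an atom and its quantifier is simply ignored. A quick check:

```python
import re

# A comment group asserts nothing, so "+" right after it has nothing to quantify
try:
    re.compile("(?#just a comment)+")
    outcome = "compiled"
except re.error as exc:
    outcome = f"error: {exc}"
print(outcome)

# With a real atom in front, the comment is skipped and "+" binds to the atom
assert re.fullmatch("a(?#comment)+", "aaa") is not None
```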
Warning: Unless otherwise stated, code is untested. Do not use without understanding. Code is posted in the hopes it is useful, but without warranty. All copyrights are relinquished into the public domain unless otherwise stated. I am not an angel. I am capable of error, and err on a fairly regular basis. If I made a mistake, please let me know (such as by replying to this node). | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00431.warc.gz | CC-MAIN-2018-05 | 848 | 2 |
https://pypi.org/project/experimenter/ | code | Use git tags to log experiments and the exact code that was used to run those experiments. The goal is to make every experiment completely reproducible. To install use
pip install experimenter
All contributions are welcome. Experimenter was inspired by [Ali's](http://arkitus.com/patterns-for-research-in-machine-learning/) and [Charles'](http://www.theexclusive.org/2012/08/principles-of-research-code.html) blog posts.
* It is the case that you might make some changes in your machine learning code, not necessarily worth committing, and spawn a process to run an experiment that uses these changes. However, by the time the experiment has finished you don't know what changes you were testing (presumably because you made more changes in the meanwhile).
* You need a distributed way of collecting all the experiments (parameters and results), making sure that they have been run on the exact same version of the code.
Create an `ExperimentLogger` object, passing the parameters of the experiment. When the experiment is finished, call
the `record_results()` method of that object, e.g.:

    with ExperimentLogger(name="NameOfExperiment", parameters=parameters_dict) as experiment_logger:
        results = run_experiment(parameters_dict)  # your experiment code here
        experiment_logger.record_results(results)
Behind the scenes, a git tag will be created (committing any changes you may have in the working tree, into a different branch). The tag will have a name of the form `exp_NameOfExperiment_timestamp` and in the message it will have a JSON representation of the parameters and the results (when/if recorded). The working state of the current branch will seemingly remain unaffected. Note that this is *not* thread-safe.
If no result is recorded (ie `record_results` is not called) within the `with` then the experiment will be deleted upon exit from that block. This is useful when stopping experiments or when experiments fail before finishing.
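The delete-if-no-results behaviour described above can be sketched with a tiny stand-in (illustrative only: the real `ExperimentLogger` shells out to git, and the class, tag name and dict store below are invented for the demo):

```python
class FakeExperimentLogger:
    """Minimal stand-in for experimenter.ExperimentLogger; no git involved."""

    tags = {}  # simulated tag store: tag name -> {"parameters": ..., "results": ...}

    def __init__(self, name, parameters):
        self.name = name
        self.parameters = parameters
        self._tag = None
        self._results = None

    def __enter__(self):
        # The real package creates a git tag named exp_<name>_<timestamp>.
        self._tag = "exp_%s_20170101T000000" % self.name
        self.tags[self._tag] = {"parameters": self.parameters, "results": None}
        return self

    def record_results(self, results):
        self._results = results
        self.tags[self._tag]["results"] = results

    def __exit__(self, exc_type, exc, tb):
        if self._results is None:
            # Experiment stopped or failed before recording: drop the tag.
            del self.tags[self._tag]
        return False

with FakeExperimentLogger("finished", {"lr": 0.1}) as log:
    log.record_results({"accuracy": 0.9})

with FakeExperimentLogger("aborted", {"lr": 1.0}) as log:
    pass  # no results recorded, so the tag is deleted on exit

assert "exp_finished_20170101T000000" in FakeExperimentLogger.tags
assert not any(t.startswith("exp_aborted") for t in FakeExperimentLogger.tags)
```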
There is a command-line tool that helps with retrieving tests. In the git folder, run the command
> experimenter -c SHAofCodeState
to retrieve all the experiments that happened with the given code version. If `-c` is not provided then all experiments will be shown. If `-s` is provided only experiments that have results will be shown. The strict option is off by default. Use `--help` for more information.
* A command line tool for starting/stopping experiments.
* Auto-push tags method in `ExperimentLogger`
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320003.94/warc/CC-MAIN-20170623045423-20170623065423-00312.warc.gz | CC-MAIN-2017-26 | 2,450 | 16 |
https://www.geeksforgeeks.org/difference-between-singly-linked-list-and-doubly-linked-list/?ref=leftbar-rightbar | code | Difference between Singly linked list and Doubly linked list
Introduction to Singly linked list : A singly linked list is a set of nodes where each node has two fields ‘data’ and ‘link’. The ‘data’ field stores actual piece of information and ‘link’ field is used to point to next node. Basically the ‘link’ field stores the address of the next node.
Introduction to Doubly linked list : A Doubly Linked List (DLL) contains an extra pointer, typically called previous pointer, together with next pointer and data which are there in singly linked list.
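A minimal sketch of the two node layouts, with a traversal showing the backward walk that only the DLL supports (Python used for illustration):

```python
class SinglyNode:
    def __init__(self, data):
        self.data = data
        self.link = None              # address of the next node only

class DoublyNode:
    def __init__(self, data):
        self.data = data
        self.next = None              # forward pointer
        self.prev = None              # the extra backward pointer of a DLL

# Singly linked: 1 -> 2
s_head = SinglyNode(1)
s_head.link = SinglyNode(2)
assert s_head.link.data == 2 and s_head.link.link is None

# Doubly linked: 1 <-> 2 <-> 3
def build_doubly(values):
    head = prev = None
    for v in values:
        node = DoublyNode(v)
        if prev is None:
            head = node
        else:
            prev.next = node
            node.prev = prev
        prev = node
    return head, prev                 # (head, tail)

head, tail = build_doubly([1, 2, 3])

forward = []
node = head
while node:
    forward.append(node.data)
    node = node.next
assert forward == [1, 2, 3]

backward = []                         # only possible because of the prev pointers
node = tail
while node:
    backward.append(node.data)
    node = node.prev
assert backward == [3, 2, 1]
```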
Singly linked list vs Doubly linked list | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00340.warc.gz | CC-MAIN-2021-49 | 918 | 5 |
http://mwomercs.com/forums/user/19144-caballo/page__tab__topics | code | This is a screenshot from a game just ended:
Funny, huh? Well, in the game before that one we fought the same team (who obviously were just having fun, and I have no offense at all against them), and they were riding 8 Highlanders. We had the same mech configuration.
It being a patch day, I find it impossible to believe it was because we were the only two 8-man teams around at that moment, so... Is this what we can expect from the matchmaking system? Are you, devs, still convinced the matchmaking-ELO-tonnage thingie works just as intended?
Caballo – Member Since 06 Nov 2011 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705069221/warc/CC-MAIN-20130516115109-00088-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 930 | 14
https://www.svl.net/ufaqs/what-is-the-rlreporting-limit/ | code | Reporting Limit (RL) is the lowest concentration at which an analyte can be detected in a sample and its concentration can be reported with a reasonable degree of accuracy and precision. A criterion of ±20% accuracy and 20% RSD for replicate determinations is often used to define "reasonable". The acceptable ranges depend somewhat on the analytical methodology used. For samples that do not pose a particular matrix problem, the RL is typically about three to five times higher than the MDL. Similar to the MDL, the RL is a laboratory-specific number, which may change with time. When a sample has to be diluted before analysis, either because of matrix problems or to get the instrument response within the linear dynamic range, the RL is raised by a factor corresponding to the dilution factor. The RL should generally be below any regulatory limits, but notifying the lab of reporting limit requirements is highly recommended. | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00366.warc.gz | CC-MAIN-2020-16 | 932 | 1
https://forum.emclient.com/t/pc-freezes-v7-1-33101-0/47886 | code | After updating to version 7.1.33101.0, I have regular freezing of the PC when emClient is in background regardless to the programs in foreground.
I surveilled this effect now over more than a week. The only common thing on freeze is the running emClient in background.
Contradiction: on my laptop there is no freezing in identical circumstances. Therefore it might be a side effect of my desktop PC hardware: i7-7700 CPU 3.6 GHz, 64GB RAM, Win10x64 (auto update).
Because it is in the background I can only suppose, that it occurs on syncing the (POP3-)accounts.
Fact is: having emClient closed no freezing happens.
Hardware checks all ended up without any findings. Drivers are up to date.
I am aware that my “bug description” is quite mysterious. But maybe someone else out there has this effect, too. Or maybe the developers have an idea what happens.
Closing emClient after usage is time consuming because I have a huge amount of e-mails (> 50k) in six accounts, archiving was set to “older than 1000 days”
I reduced this to 300 days (for now) to have a faster start up, because over the next days I have to work without risk of data loss. Therefore I will close emClient during work and open it on my laptop simultaneously.
I am using a lifetime licensed 3 user version.
Can you exit Windows and boot into BIOS. Then run a complete memory diagnostic.
As I said, all hardware checks ended up with no findings. Memory is tested and o.k. (before I placed this post). In the next few days I will take a closer look at the peripheral devices in order to rule out interference from there.
Please do nothing about this until further notice!
It seems as if there is a problem with mouse or keyboard. I changed hardware to another computer and the same freezing happens now there. Without running emClient.
I have to investigate this (without having an idea how exactly) but there is a good chance that emClient has nothing to do with it.
It will be good to know either way NoSi.
As far as I could investigate, emClient has nothing to do with this problem. Looking deeper, I found out that besides emClient, other less obvious software was running during the effect.
I can't say for sure if that was the one to blame, but after exchanging an (updated) mouse driver add-on (for the previous one) for my Logitech trackball, the hanging did not appear again. That this locked the keyboard, too, was misleading.
This thread can be closed (from my side).
The described effect was still appearing. Excluding more and more applications as a possible reason, I came to the conclusion that it has to be a hardware problem.
After changing the graphics card – which mastered all tests regarding this with flying colors – it seems as if it is really solved. I have to admit that I can not explain what drove me to the decision to replace an apparently working graphics card.
Glad it is sorted NoSi.
Maybe it was the graphics driver and not the card itself.
Definitely not. All drivers were part of detailed investigations. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00757.warc.gz | CC-MAIN-2022-40 | 3,045 | 24
https://acklenavenue.com/careers/senior-apple-ios-developer | code | Remote - Based in AMER/LATAM
Seeking someone with experience who can be a quick resource for our leads, answering questions about best practices and proven development strategies on Apple TV.
- Required Technologies: Swift 5.0+.
- Preferred Technologies: Mastery of the Xcode 10.2+ IDE, knowledge of RESTful APIs, understanding of OOP and UI design, understanding of Apple's design principles and interface guidelines. Proficient understanding of code versioning tools such as Git.
- Strongly preferred Communication and Leadership Skills.
- Strongly preferred Agile experience.
- Senior Level.
- Full time job (40 hours weekly).
- Permanent contract.
- C1 Advanced English Level.
- Availability from 8-5pm in the CST time zone.
- Remote Work. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00198.warc.gz | CC-MAIN-2022-27 | 722 | 12 |
https://www.avrfreaks.net/forum/trouble-programming-attiny85-avrisp-mkii | code | I'm trying to program an ATTiny85 via AVRISP and AVR Studio 4.19 and getting an ISP mode error in the process. I want to use the internal 8 MHz oscillator, which should be the default, but the fuse appears to be set to an external clock (see attached screenshots). When I try to change it to internal and hit Program, I get the error. I have a 1K pullup resistor on RESET, and have the programming frequency set to 1 MHz. Any thoughts about what I should check or do differently? Thanks.
Sorry if this should be in the AVR Studio forum. Seemed equally at home here, so I flipped a coin. ;-) | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891203.69/warc/CC-MAIN-20201026090458-20201026120458-00247.warc.gz | CC-MAIN-2020-45 | 590 | 2 |
http://www.basho.cz/2015/04/06/tips-of-the-week-2015-03-30/ | code | Tips of the week 2015-03-30
- Neal Ford: Architecture is abstract until operationalized
- Tugberk Ugurlu: Compiling C# Code Into Memory and Executing It with Roslyn
- Ionic Blog: Angular 2 Series: Introduction
- Cameron Lerum: MAT v4.0 Technical Preview Update 3 with Xamarin support The Multilingual App Toolkit v4.0 Technical Preview Update 3 is available today (v4.0.1262.0).
- Stephen Siciliano: Create a new logic app
- Mark Downie: Structured Data Markup–Improving your SEO and Google Search Presentation
- Kundana Palagiri: Chef Server in Marketplace, Chef Azure Provisioning and more
- Microsoft: mail2bug is a service that creates work-items in TFS (incl. VS Online) from an email, and keep the items updated with responses on the thread.
- slackapi-angularjs AngularJS module wrapper for the Slack Web API and oAuth helpers for token authentication.
- .NET Rocks: #1119 Azure App Service with Scott Hunter
- SE Radio: #224 Sven Johann and Eberhard Wolff on Technical Debt
- Hello World Podcast: #52 Matt Pietrek | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826354.54/warc/CC-MAIN-20181214210553-20181214232553-00581.warc.gz | CC-MAIN-2018-51 | 1,023 | 13 |
http://www.javaprogrammingforums.com/%20java-networking/19413-java-socket-rmi-programming-help-printingthethread.html | code | I am quiet new to Network programming in Java and want to learn how do we read packet headers in Java programs. i.e. I want to read the data in packet header of the Transmission or Network Layer in a client server architecture. If anyone can help, I would be grateful.
Also if you can suggest some links or books to understand the power of RMI programming.
Thanx in Advance | s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737952309.80/warc/CC-MAIN-20151001221912-00202-ip-10-137-6-227.ec2.internal.warc.gz | CC-MAIN-2015-40 | 373 | 3 |
http://www.tomshardware.com/forum/274955-32-external-hard-disk | code | External Hard Disk
I have a 500GB Seagate usb 2.0 External Mass Storage Device (Plug and Play) with my important data on it, and I have been using it without any problem for several months. Now when I plug it in, the auto-play function starts for few seconds but no files appears, the drive comes up in my computer as drive “E” as it usually does, but then when I click on it, it says my external hard drive is not formatted and asks me if I want to format it. I assume reformat it would erase all of the data I have on it so I didn't do it but when I click "no" I can't access the drive. The drive properties show 0kb for free space and 0kb for used space. What happened to the around 70 GB of files I stored on the drive? Is there another way to access the data on the drive? | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00161.warc.gz | CC-MAIN-2018-09 | 781 | 2 |
http://our.angelsgait.com/post/what-does-2-mean | code | - What does 2 mean on twitch
- What does 2 mean in python
- What does 2 mean in texting
- What does 2 mean in automatic cars
- What does 2 mean on gear shift
- What does 2 mean in r
- What does 2 mean angel number
At its very core the 2 in Numerology represents partnerships -- the coming together or balancing of two individual people, concepts, or things. While it holds great power over any situation, it wields it with such diplomacy and tact that the result is not control and authority, but harmony and teamwork.
Some common math symbols, with their names, meanings and example problems:
- x (variable): an unknown value that needs to be found; other letters from the alphabet can be used as well. Example: 2x = 4, so 2x/2 = 4/2 and x = 2.
- ∝ (proportional): proportional to, e.g. ƒ(x) ∝ g(x).
- ∞ (lemniscate): infinity, e.g. ∞ + 1 = ∞.
- ⌈ ⌉ (ceiling brackets): round …
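Several of these entries can be checked directly in code (Python shown):

```python
import math

x = 4 / 2                                # solve 2x = 4 by dividing both sides by 2
assert x == 2

assert float("inf") + 1 == float("inf")  # infinity absorbs any finite addition

assert math.ceil(2.3) == 3               # ceiling brackets round up to an integer
```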
The data are plotted in Figure 2.2, which shows that the outlier does not appear so extreme in the logged data. The mean and median are 10.29 and 2, respectively, for the original data, with a standard deviation of 20.22. Where the mean is bigger than the median, the distribution is positively skewed.
· Question: "What does it mean to preach the Word (2 Timothy 4:2)?" Answer: Second Timothy is likely the final letter that the apostle Paul wrote. It is written to Timothy, who was his “son in the faith” (1 Timothy 1:2) and personal envoy.
It means it might be able to process a maximum of 4 threads per core. So a 2-core CPU with multi-threading of 4 means it can possibly process a maximum of 8 threads or routines at the same time. 2 cores multiplied by 4 threads per core equals a maximum of 8 threads.
@dbr cmd 2>&1 >>file does not redirect stderr to the file, but cmd >> file 2>&1 does. Order matters. In the first case, stderr is redirected to the stdout of the shell (possibly a tty if the command is entered interactively), and then stdout is directed to the file.
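The order-matters point is easy to verify; the sketch below drives a POSIX `sh` through Python's `subprocess` (the toy command and file name are made up for the demo):

```python
import os
import subprocess
import tempfile

CMD = "(echo out; echo err 1>&2)"  # writes one line to stdout, one to stderr

def run(redirections):
    with tempfile.TemporaryDirectory() as d:
        log = os.path.join(d, "log.txt")
        proc = subprocess.run(
            "%s %s" % (CMD, redirections.format(log=log)),
            shell=True, capture_output=True, text=True,
        )
        with open(log) as fh:
            return fh.read(), proc.stdout

# "2>&1 >>file": stderr is duplicated onto the *original* stdout first,
# then stdout alone is sent to the file -- stderr never reaches the file.
file_text, shell_stdout = run("2>&1 >>{log}")
assert file_text == "out\n" and shell_stdout == "err\n"

# ">>file 2>&1": stdout goes to the file, then stderr follows it there.
file_text, shell_stdout = run(">>{log} 2>&1")
assert file_text == "out\nerr\n" and shell_stdout == ""
```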
I dont have a pic but let me try my best to describe it for you guys. It has multiple forms that it comes on. Like a fifty cent piece. And on one side it has like a sheild with a crest or a horse with one leg bent on the front legs bent. A rider with sword or joust at half mast . oh ya and a serpent coming out of the water and the horse looks to be about to trample it with one of its front ...
That means that the assignment itself has a value, and -for fundamental types- this value is the one assigned in the operation. For example: y = 2 + (x = 5); In this expression, y is assigned the result of adding 2 and the value of another assignment expression (which has …
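Python grew the same idea in 3.8 as the walrus operator, which makes the snippet above directly translatable (shown only as an analogy):

```python
# The assignment expression yields the assigned value, so it can be
# embedded in a larger expression, just like `y = 2 + (x = 5);` above.
y = 2 + (x := 5)
assert x == 5
assert y == 7
```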
· The :-) notation is known as a smiley, and means that the statement it follows was intended as humor. When you tilt your head to the side, you see that : is the eyes, - the optional nose, and ) is the mouth. This notation is often used in email, text messages, and other postings to communicate emotional context that would otherwise be lost or unclear.
· what definition: 1. used to ask for information about people or things: 2. used in questions that show you are…. Learn more. | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00136.warc.gz | CC-MAIN-2021-04 | 3,040 | 17 |
https://blogs.sap.com/2018/04/25/adding-chat-bot-to-a-website-using-sap-web-ide-part-one/ | code | Adding Chat Bot To A Website Using SAP Web IDE – Part One
Over the last few months I have become more interested in chat bots and how they can serve customers. After a bunch of research and testing I decided to start a project in order to learn about bots.
This is the first of 2 articles that I have planned. In this article, I explain what a chatbot is and show how to get started using SAP Web IDE. In the follow up article I will finish the integration to a chatbot. I am planning on using Amazon Lex at the moment, however that may change.
Web IDE For The Front End
I decided to use SAP Web IDE to develop the front end of the project, as Web IDE allows fast yet robust development of sites that can be deployed easily. Web IDE was the perfect fit for this learning project, as I could simply log into the web-based IDE from any location and work on the project.
What Are Chatbots?
Make no mistake: when a person is chatting with a bot, it is simply a conversation with a computer. The chatbot is an automated response system that may initiate a conversation, respond to your questions, or perform tasks.
Chatbots communicate with a person in one of two ways. The first is a text-based interface, where the person types messages and reads responses; the second is a voice-based system like Siri or Alexa.
Natural language processing (NLP) is used by bots to parse the question presented by the user. The concept of NLP has been around since the 1950s, and in use since the 1970s. However, only recently has NLP become mature enough for companies to put it in front of their customers.
What Can Chatbots Do?
Bots can perform 2 basic functions: answering questions and performing automated tasks.
Many websites are using bots to automatically answer customer questions. For instance, a health food site may implement a bot on their site. Visitors to the site will be able to ask the bot questions, such as “Show me 5 tasty recipes with kale as the main ingredient”. The bot will use NLP to process the request, returning a relevant answer or possibly a list of recipes for the user to view.
There are an unlimited number of applications for bots, two that may become popular quickly are banking and shopping. For instance, banks can add bots to the user experience allowing users to simply ask “what is my account balance” or “how much did I spend on food last month”. For shopping, the store will have a bot that will allow you to ask questions like “when will my shoes arrive” or “order more laundry detergent”.
Where Can People Use Bots
Bots are (or will be) accessible on any channel where you interact via text or speech, such as a web browser, phone app, Siri, Alexa or Google Assistant. Each channel lets you choose which bot you would like to access; once the bot is chosen, you can then start typing or speaking.
For bots that access sensitive information, you will need to connect your channel (phone, computer, etc.) to the bot. For instance, if you are using Alexa on your Amazon device, you will have the ability to find the bot for your bank and connect to it using a command like “connect to the [my bank name] bot”. Alexa will then find the appropriate bot and walk you through an authentication procedure before the connection is established. Once the channel is connected to your bank's bot, you will be able to ask questions and initiate actions.
Creating A Shell Application With Web IDE
Here I will explain the basics of getting started with Web IDE and creating the shell application. If you don't already have a SAP account, you can follow these instructions to open your account. If you do have a SAP account, log in to the account. Once you have an account and are logged in, follow these instructions…
1) Hover over “Developer”, then click “SAP Web IDE”
2) Click “Sign up for free” then follow sign up instructions
3) Click “Launch SAP Web API”
3) Click “Launch SAP Web IDE”
5) Double click the index.html file to open it for editing
6) We will not be accessing any data or sample data in this app, so we can remove the sample data section from the index.html file – highlight this script and delete it.
7) Your index.html file should now look like this
8) Add an H1 tag to identify the page – this also lets us see that the changes we are making in the html page are reflected when the project is run
9) Run the project – Note that your web browser may be blocking the popup
10) Here we have the application running; the shell is now created for adding Chatbot functionality
At this point we have discussed what a Chatbot is and how people can interact with bots. We have also created a simple shell application that we will build the bot functionality on. In the next article, we will build the bot and integrate the bot into the shell application.
UPDATE 5/1/2018 – After looking at a bunch of bot technologies, I decided on https://recast.ai/ which is owned by SAP. I also decided to use the next article to build the bot, as the build process will require some explanation. So the next article will not be about the integration, but the bot build. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506045.12/warc/CC-MAIN-20230921210007-20230922000007-00742.warc.gz | CC-MAIN-2023-40 | 5,213 | 30 |
https://commercehero.io/tags/symfony | code | B2B E-Commerce application for the B2B server and high performance computing company transtec AG. The main challenges were the configuration process of servers consisting of multiple components with special resource provision and consumption per component and the direct integration with the internal ERP system to provide special conditions to the customers.
Magento 1 B2C Shop with ERP integration and custom product pricing based live precious metal rates.
Over the past years it has become more and more important to build systems and utilities that work with multiple frameworks. With the help of Darko Poposki I have spent a year testing out my architecture ideas in projects (sadly covered by NDAs) but now we are venturing into the public space.
The idea came to us to have a utility that can import and export entities into a common format. Because what really is a page, product or user?
We came up with the idea of having one central module that contains interfaces to define the generic service layer and then building a bridge for each framework's implementation.
For the first iteration we have built a way to import and export basic pages into and from a JSON file using a serializer. At the time of writing this is a very small set of features, but the focus is on extensibility.
So this is it:
Some interesting facts overall it took us 15 hours to build and once we have the generic module and Magento2 module it took 10 minutes to build the Magento1 bridge.
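As an illustration of the "common format" idea described above, a round-trip through JSON might look like the sketch below (the `Page` dataclass and function names are invented here, not the module's actual API):

```python
import json
import tempfile
from dataclasses import dataclass, asdict

@dataclass
class Page:
    """A framework-agnostic stand-in for 'what really is a page'."""
    title: str
    slug: str
    body: str

def export_pages(pages, path):
    # Serialize entities into the shared, framework-neutral JSON format.
    with open(path, "w") as fh:
        json.dump([asdict(p) for p in pages], fh, indent=2)

def import_pages(path):
    # Rebuild entities from the shared format, regardless of target framework.
    with open(path) as fh:
        return [Page(**record) for record in json.load(fh)]

with tempfile.TemporaryDirectory() as d:
    path = d + "/pages.json"
    original = [Page("Home", "home", "<p>Hello</p>")]
    export_pages(original, path)
    assert import_pages(path) == original  # survives the round trip
```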
This pre-processor parses an .xlsx file (or similar feed) with products for a Magento import, specifically Magmi. The headers can be mapped to predefined functions and variables, which makes it very configurable. After that, they can be mapped back to the necessary entities for Magmi / Magento or any other importer. It also finds and links images, modifies prices, couples simple and configurable products, detects which attributes to use, validates input, and so on. It's basically a matter of writing the extra logic you need in the designated class and mapping some values in yaml files. The most frequently used functions and validations are already there. The pre-processor is the result of a project that kept changing its specs, which made it complex and bloated. I recently did a complete refactor which made it pretty useful. It processes ~50k products, each with 40 attributes, images, simples / configurables etc., in 5 minutes. I will expand its features whenever I need and maybe open-source it soon.
signumgame.com/ – Magento 2 based online store with symfony based add-on
We have implemented:
– layout changes (re-styling of the whole layout)
– development of additional layout for symfony based add-on. Integration with Magento 2 theme
– frontend development for mobile view
– extensions installation and development
– performance optimization
– symfony based custom add-on
Sign up now to add your profile to the site. Whether you're a freelancer or work for an agency or a merchant, you can find other developers to hire or get clients for yourself or for your company. | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00149.warc.gz | CC-MAIN-2020-05 | 3,086 | 18 |
https://cossan.co.uk/wiki/index.php?title=Category:Abstract_Classes_and_Interfaces&oldid=9119 | code | Category:Abstract Classes and Interfaces
An abstract class serves as a basis for a group of related subclasses. It forms the abstractions that are common to all subclasses by specifying the common properties and methods that all subclasses must implement.
An abstract class cannot be instantiated.
Abstract classes are useful for describing functionality that is common to a group of classes, but requires unique implementations within each class. This approach is often called an interface because the abstract class defines the interface of each subclass without specifying the actual implementation.
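A minimal sketch of the pattern in Python's `abc` module (the `Shape`/`Circle` names are invented for illustration):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract class: specifies the interface, not the implementation."""

    @abstractmethod
    def area(self):
        """Each concrete subclass must supply its own area()."""

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):                       # the unique per-subclass implementation
        return 3.141592653589793 * self.radius ** 2

try:
    Shape()                               # abstract classes cannot be instantiated
except TypeError as exc:
    print("refused:", exc)

assert abs(Circle(1.0).area() - 3.141592653589793) < 1e-12
```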
The basic idea of an interface class is to specify the properties and methods that each subclass must implement without defining the actual implementation. This approach enables you to enforce a consistent interface to a group of related objects. As you add more classes in the future, the original interface remains unchanged. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155925.8/warc/CC-MAIN-20210805130514-20210805160514-00698.warc.gz | CC-MAIN-2021-31 | 920 | 5
https://chauff.github.io/documents/bdp-quiz/graph.html | code | More quizzes: Hadoop Graph Pig Streaming
All errors are my own! Though if you find any, an email would be appreciated ....
Be aware that some of these questions may not make a lot of sense outside of the taught course.
Graphs and Pregel/Giraph | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00476.warc.gz | CC-MAIN-2023-40 | 254 | 4 |
http://www.mmo-champion.com/threads/705389-Laptop-Wow-Specs-But-Runs-Like-Shit | code | Hey guys + girls,
Atm I'm trying to work out why WoW runs so badly during raids on my PC.
here is some info,
My laptop was brand new in October 2009 and cost just under £800. I play with the lowest settings possible, everything turned down and no special features on. Regarding addons, I only use 13 MB of addon memory (tried with no addons, no difference noticed), yet I get insane lag in 25-man raids, constant lag spikes and bad performance, making it mostly unplayable :<
Every site I've checked (which is over 1000000) says that my PC has above the recommended shit needed for WoW, so I'm confused.
check this site for instance
Just one of the sites, but according to that and all the others I have above what's recommended, and yet it runs like shit :<
please does anyone have any suggestions or help regarding this? | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188213.41/warc/CC-MAIN-20170322212948-00130-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 814 | 8 |
https://blendswap.com/blend/3412 | code | I have updated these camera rigs for 2.59
-the aim control will now automatically point towards the camera
-the script to run the control panel is now an add-on so it can be installed and it stays active as long as the
Here are 2 camera rigs for Blender 2.59; this is a short video to show all the controls.
they are both pretty simple and all the controls you will need are in the side panel. (press n)
One is a dolly rig and the other is a crane rig - the only difference is that the crane has an adjustable arm length and height.
They are grouped so you can just append the group to a new file - however, you will need to append the python file separately as it doesn't automatically get appended in the group.
The Dolly Rig is on Layer 1, The Crane Rig is on Layer 2
They are grouped so you can just append the group to a new file
However, there are now 2 ways to activate the UI controls
you can either append the scripts from this file separately (it won't append when you import the camera group) and then run the file - "run script".
you can install the script as an add-on into the animation section
(this is explained in the video) | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00712.warc.gz | CC-MAIN-2022-27 | 1,140 | 13 |
http://www.sqlsoldier.com/wp/sqlserver/crossdatabasequeriesbycontainedusers | code | Cross-database Queries by Contained Users. I presented to the Pacific Northwest SQL Server User Group / PASS Chapter (PNWSQL) last night on the subject of contained databases. The slide deck and demo code from my presentation, Implementing Contained Databases, can be downloaded below or will be uploaded to the User Group site in the near future, if not already. The presentation showed the basic implementation and administration tasks as well as covering the pitfalls and gotchas that you need to watch out for.
There was a question brought up last night by my good friend Meher (@MeherSQL) for a scenario I had not yet tested. Since we know that contained users have no permissions external to the database, what would happen if the same contained user exists in more than 1 database on the server? Would they be able to access multiple databases without re-logging in? I postulated that the only way that would work is if the security identifiers (SIDs) were the same. If they're not the same, it definitely won't work. If they are the same, it may work.
I set out to test that theory today. I created 2 contained databases on one of my SQL Server 2012 instances and used the same Create user statement to create a contained user (user with password) in both databases.
-- Create contained database #1
Create Database CDTest Containment = Partial;
Go

-- Create contained database #2
Create Database CDTest2 Containment = Partial;
Go

-- Create contained user in database #1
Use CDTest;
Create User TestCDUser With Password = N'12345', Default_Schema = dbo;
Go

-- Create contained user in database #2
Use CDTest2;
Create User TestCDUser With Password = N'12345', Default_Schema = dbo;
Go
Then I open a query window and log in to database CDTest as the contained user. The first thing I notice is that I can only see the database I logged into, plus tempdb and master, in the dropdown list. Then I run just a basic query against the other database, and I get an error stating that I cannot access the other database. In another query window, I log in to the other contained database and note the exact same behavior on the second attempt, though this time the error reported a different SID for the account.
The error statement:
Msg 916, Level 14, State 1, Line 1 The server principal "S-1-9-3-3229113722-1181156168-1455050630-361674451." is not able to access the database "CDTest2" under the current security context.
Another friend, Nic Cain (blog|@SirSQL), suggested creating the second user with the same SID as the first. I dropped the user in one of the databases and then recreated it using the same SID as the other. Note that to do this, I don't use the SID specified in the error message. I use the binary SID stored in sys.database_principals.
Create User TestCDUser With Password = N'12345', Default_Schema = dbo,
    SID = 0x0105000000000009030000007A5D78C048036746864FBA56D3B68E15;
Go
I run the same test as before and get the same results except this time the error messages report the same SID. No luck. On a hunch, I try enabling the TRUSTWORTHY property on both databases. I refresh my connection in both query windows. I note that the database selection dropdown still shows only the current database plus master and tempdb. However, when I attempt to query the other database in each window, I get vastly different results. With the TRUSTWORTHY option set, I am able to run cross-database queries as the same user.
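The TRUSTWORTHY step described above can be sketched as follows. The ALTER DATABASE statements are the standard syntax; the SELECT is an illustrative cross-database query, not one taken from the original test:

```sql
-- Enable the TRUSTWORTHY property on both contained databases
Alter Database CDTest Set Trustworthy On;
Alter Database CDTest2 Set Trustworthy On;
Go

-- Connected to CDTest as TestCDUser, a cross-database query like
-- this now succeeds; with TRUSTWORTHY off (or mismatched SIDs) it
-- fails with Msg 916 as shown earlier
Select name From CDTest2.sys.tables;
Go
```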
I dropped the second user again and recreated it without specifying the SID. Now that the SIDs no longer matched, attempts to run cross-database queries fail with the same error as before. If you want to use contained users but need to be able to perform cross-database queries, this method will allow you to accomplish it.
As a side note, I must remind you that using TRUSTWORTHY opens your database up to security holes, and it is recommended not to use it unless you have no other option.
Session Files From My Presentation
Implementing Contained Databases: ImplementingContainedDatabases.zip (494 KB) | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00459.warc.gz | CC-MAIN-2018-05 | 4,037 | 14 |
https://www.ae.be/blog/combining-the-power-of-r-and-d3-js/ | code | According to wikipedia, the amount of unstructured data might account for more than 70%-80% of all data in organisations. Because everyone wants to find hidden treasures in these mountains of information, new tools for processing, analyzing and visualizing data are being developed continually.
When using R for data processing, there are a couple of options to produce graphics within R itself. One of them is to use the R package called 'ggplot2'. This package makes it easy to turn data into beautiful charts. Consider the following chart, produced with ggplot.
As you can see, there is a lot of data on this plot, which makes it difficult to see what the values are at a specific point in time. Zooming in is not an option, because it's a static PNG image. If we want more detail for a specific time period to fit on the chart, we have to run our ggplot scripts again with a smaller data set, which is not a flexible way of visualizing data.
Let's add some interactivity
To combine D3 with R, again you have a couple of options. It depends on whether you want to build the plot first and add the interactive binding afterwards, or do the binding first and build the plot from the data directly.
First plotting, then binding
There is a solution to this problem, and that is Plotly.
Plotly is built on D3 and they have done all that binding work for you. They offer multiple API's that not only work with R, but also with Python, Matlab, NodeJS and Excel. They also have an API especially for ggplot users, which makes it easy to extend our previous example. It works by uploading your ggplot (which also contains your data) to a repository on their servers. Afterwards, all D3 binding is done and you get a fully interactive plot that you can embed in any webpage:
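The embedded interactive chart does not survive in this text dump. As a sketch of the ggplot-to-Plotly workflow described here (using the current plotly R package's ggplotly(); at the time of this post the workflow went through Plotly's web API and an account):

```r
library(plotly)
library(ggplot2)

# any ggplot object; the mpg data set ships with ggplot2
p <- ggplot(mpg, aes(displ, hwy)) + geom_point()

# wrap the static ggplot in an interactive, D3-based version
ggplotly(p)
```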
First binding, then plotting
To illustrate this, we start from scratch with a new example. We will be going through the 3 basic steps in data science.
- Get the data
- Clean the data
- Visualize the data
Looking at the html-page which contains all the info, we see that each row in the table has an author, a title, multiple tags, multiple categories, and a publication date. We can identify relationships between these entities. A good way to visualize relational data with D3 is the D3 bundle layout. If we want to use this type of visualization, we need to know in what format our data should be. We can see in the D3 code example where the data comes from: a JSON-file, which contains all relations between different elements, grouped by the type of element. Armed with this knowledge, we can start evaluating the 3 basic steps.
Step 1: Get the data
We get the data by scraping the html page which contains an overview of all blog posts. The blog-data in this webpage is structured in an html-table. R has packages which enable you to easily scrape the data from such a table. First we save this page as a static html page, so we can parse it more easily.
The code looks something like this.
library(XML)

# read all html table elements
raw <- readHTMLTable("WordPress.html")

# ours is the first of two tables
# in the html document
data <- raw[[1]]
Step 2: Clean the data
Step 1 is done. We got our data. Next up is cleaning the data and storing it in the right format. We can determine the 'right format' by looking at the D3 code example. A JSON-file is used as data-input for the visualisation. This JSON-file should contain all relations, for each single element. The end result should look something like this:
{
  name: "Title.But do you love it?",
  ...
},
To achieve this result, we can use R to reorganize the data. R has some packages that can help us achieve this. For example, the package 'reshape' helps to reorganize tabular data, or the package 'RJSONIO' which serializes R objects to JSON. After some more R magic, the data is cleaned and in the right format.
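As a sketch of that serialization step (the field names and example values below follow the D3 bundle-layout examples, not the post itself):

```r
library(RJSONIO)

# hypothetical cleaned structure: one record per element, each
# carrying its name and the names it is related to
relations <- list(
  list(name = "Title.But do you love it?",
       imports = c("Author.John Doe", "Tag.data"))
)

# serialize the R list to the JSON file D3 will load
cat(toJSON(relations), file = "relations.json")
```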
Step 3: Visualizing the data
This JSON-file, containing all data, is accessed by D3 as follows:
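The code sample that belongs here did not survive extraction. In the D3 v3 examples this chunk is based on, the file is loaded with d3.json("relations.json", callback); the sketch below (file name and contents are assumptions) just mimics what that callback receives:

```javascript
// what d3.json("relations.json", function(error, classes) { ... })
// would hand to the layout code: an array of {name, imports} records
var classes = [
  { name: "Title.But do you love it?", imports: ["Author.John Doe"] },
  { name: "Author.John Doe", imports: [] }
];

// each record names a node and the nodes it links to
classes.forEach(function (c) {
  console.log(c.name + " -> [" + c.imports.join(", ") + "]");
});
```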
Simply plugging this data into this code example gives us the final end-result: a fully interactive D3 graphic (screenshot below). Move the mouse over the text to see all relations among the different entities.
There is still much more to be said when it comes to integrating R and D3. This post just scratches the surface. Projects like rCharts and clickme or visualizing ggplots with Shiny and D3 are all different approaches to combining R and D3. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296820065.92/warc/CC-MAIN-20240425000826-20240425030826-00006.warc.gz | CC-MAIN-2024-18 | 4,226 | 30 |
https://sourceforge.net/p/notepad-plus/discussion/331754/thread/0e76e9b5/ | code | For reasons that I won't go into, I need to do the following
I have a long list of references (numbers and letters)
I simply need to add a tab break after each one and insert the same reference to give me
What would be the best way to do this please?
Sorry, but I'm not very good with Notepad++. I read the manual a long time ago, but I forget things and don't have too much time at work. So I prefer the forum.
Ah, the power of regular expressions. You can do a Find/Replace in regular expression mode. Use ^(\w+)$ for the "Find" textbox, and \1\t\1 in the "Replace" textbox.
For a little explanation of ^(\w+)$. The ^ character says find the beginning of a line. Everything in the parenthesis is a group (this being group 1 since it is the first and only group in the expression). \w is any word character, that is A-Z, 0-9, and underscore. The + says find 1 or more consecutive word characters, and finally the $ is the end of a line.
Using \1\t\1 says replace it with group 1, followed by a tab, and then group one again.
It worked first time and I was also able to fine my expressions beginning and ending with " and even those in brackets by changing the regex slightly (i'm not good at regexes but I'm getting better)
Hi, Safe Tex,
The Search/Replacement and the explanations, proposed by dail8859, are quite correct and useful, but it's still possible to simplify the regex, in the SEARCH part !
Just use .+ in the SEARCH text box and $0\t$0 in the REPLACEMENT text box
.+ represents all the contents of any NON empty line
$0 represents the totality of the SEARCH string
Of course, it's the "initial" example and I know, as you said in your last post, that you "customized" your regex afterwards, to suit your proper needs !
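Both regexes above can be checked outside Notepad++ as well; here is a small Python sketch (the sample references are made up, not from the original post; Python writes \1 as \1 in a raw string, and $0 becomes \g<0>):

```python
import re

# hypothetical references, one per line
text = "REF123\nAB-45\n"

# Equivalent of Find: ^(\w+)$   Replace: \1\t\1   (needs multiline mode)
doubled_words = re.sub(r"^(\w+)$", r"\1\t\1", text, flags=re.M)
print(repr(doubled_words))   # the hyphen in AB-45 is not a \w character

# Equivalent of Find: .+   Replace: $0\t$0   (Python spells $0 as \g<0>)
doubled_lines = re.sub(r".+", r"\g<0>\t\g<0>", text)
print(repr(doubled_lines))   # every non-empty line is doubled
```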
You will find good documentation, about the new Perl Common Regular Expressions (PCRE), used by N++, since the 6.0 version, at the TWO addresses below :
The FIRST link concerns the syntax of regular expressions in SEARCH
The SECOND link concerns the syntax of regular expressions in REPLACEMENT | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00760.warc.gz | CC-MAIN-2018-13 | 2,029 | 18 |
https://community.intel.com/t5/Intel-Fortran-Compiler/ICE-for-merge-with-strings/m-p/1207204 | code | The following code (misuse of merge as a ternary operator) results in an ICE with ifort version 22.214.171.124 unless compiled with -O0
program test
character(len=*), parameter :: str = 'abcde'
print*, index(str,merge('a','d',str(4:4) == 'd'))
end program test
https://jp.mathworks.com/matlabcentral/answers/1638180-elementwise-division-in-matrix-notation?s_tid=prof_contriblnk | code | Elementwise division in matrix notation
32 views (last 30 days)
I am trying to write the following code in matrix notation:
A = B. / C;
A, B and C are column vectors with 1435 rows.
I have already managed to do this with multiplication, i.e.:
D = E.*F;
This is equivalent to
D = diag(F)*E;
Also in this case D, E and F are also column vectors with 1435 rows.
I want to do this because it is more foolproof and does not give results if the dimensions do not match.
Other Answers (1)
Image Analyst on 28 January 2022
Edited: Image Analyst on 28 January 2022
Try getting rid of the space between the dot and the slash:
A = B ./ C;
If the length of B and C don't match, you might want to consider if you even want to divide them. Like, WHY don't they match? If one is shorter to you want to just assign the "left over" elements to something specific, like 0 or 1 or B or C or something? | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00237.warc.gz | CC-MAIN-2022-33 | 911 | 17 |
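For the matrix-notation form the question actually asks about, the analogue of the diag() trick for multiplication is a matrix left division, assuming C has no zero entries (this is a sketch, not part of the accepted answer):

```matlab
% elementwise form
A = B ./ C;

% matrix-notation form: solves diag(C) * A2 = B,
% so A2(i) = B(i) / C(i); it errors out if the sizes do not match,
% which matches the "foolproof" motivation in the question
A2 = diag(C) \ B;

% the two agree up to rounding: max(abs(A - A2)) is on the order of eps
```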
https://docs.bentley.com/LiveContent/web/STAAD.Pro%20Help-v11/en/RC_design_options_slab.html | code | D. (Slab) Design Options dialog
Used to initiate the design of all regions as per design brief of the slab and local axis of each region. Displays all the regions in the active Slab.
Opens when the Concrete Slab | Design page is selected.
Note: All design moments are resolved with respect to local axis of each region before reinforcement calculation proceeds.
|Regions||Displays the slab regions which will be included in the pending design.|
|OK||Closes the dialog.|
|Cancel||Closes the dialog.|
|Design||Initiates a slab design and closes the dialog.|
|Help||Opens the STAAD.Pro help window.| | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735823.29/warc/CC-MAIN-20200803170210-20200803200210-00164.warc.gz | CC-MAIN-2020-34 | 596 | 9 |
https://www.mattpolisson.com/working-papers.html | code | Recent Working Papers
Revealed Preferences over Risk and Uncertainty (with John K.-H. Quah and Ludovic Renou), 2019.
- Short Abstract: Develops and implements a nonparametric procedure, called the lattice method, for testing the consistency of contingent consumption data with a broad class of models of choice under risk and under uncertainty.
- Working Papers: St Andrews 2019, QMUL 2017, IFS 2015, Oxford 2015, Leicester 2013.
- Slides: Lattice Method.
A Lattice Test for Additive Separability, 2018.
- Short Abstract: Establishes necessary and sufficient conditions for a finite data set of price and demand observations to be consistent with an additively separable preference, with an empirical application to panel data on food purchases.
- Working Papers: IFS 2018, St Andrews 2018.
- Slides: Additive Separability.
Older Working Papers
Demand Analysis with Partially Observed Prices (with Ian Crawford), 2016. | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00080.warc.gz | CC-MAIN-2019-47 | 918 | 11 |
http://www.bcsportbikes.com/forum/showthread.php/97361-which-DRZ-seems-like-the-best-deal | code | so I'm trying to make a decision but not too sure which one i want to go for. heres the deal:
1) 2006 Yellow DRZ for 5500
-oil changes every 5000kms done by suzuki dealership
2)2006 black DRZ for 5500
-LED rear signals and brake light
-needs a new rear tire
-front fork seals are leaking oil (leaks already for a 2006?)
which one would you guys choose? and why.
any help would be great! | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719139.8/warc/CC-MAIN-20161020183839-00455-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 386 | 9 |
https://qode.ticksy.com/article/9420/?print | code | When it comes to importing demo content, before you hit import, disable all 3rd party plugins (to free up server resources), then go to import, open chrome developer tools, hit Import, and watch console for errors. When some error occur refresh import page and repeat the process. Importing demo content must reach 100% without displaying error in the console. So repeat it until console is clear (sometimes you have to repeat process for 3-4 times or maybe more).
Please understand that this is a common issue when importing demo content with all theme authors, since demos have become more robust to satisfy the needs of all customers, and the issue is usually triggered by low server resources. Usually it is due to a low WordPress memory limit https://codex.wordpress.org/Editing_wp-config.php#Increasing_memory_allocated_to_PHP and low PHP directives, so it would be good to increase these values or ask your hosting service to do that for you:
max_execution_time = 300
memory_limit = 128M
upload_max_filesize = 64M
post_max_size = 256M
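For the WordPress memory limit mentioned above, the linked Codex page describes a wp-config.php constant; a minimal sketch (the 256M value is an example, not a theme requirement):

```php
// in wp-config.php, above the "That's all, stop editing!" line:
// raise WordPress's own memory ceiling, independently of the
// server-level PHP directives listed above
define( 'WP_MEMORY_LIMIT', '256M' );
```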
Helpful reading -
Also check instructions regarding import in theme documentation http://bridge.qodeinteractive.com/documentation/3-demo-content/3-1-qode-import/
If the progress bar doesn't reach 100%, you can finish the import via XML.
To so so, navigate to Tools > Import > WordPress > Run Importer. Upload the theme-export.xml file, which you can find in the file you downloaded from ThemeForest, in the folder "xml export." If the connection runs out, simply upload it again and again, until you receive the message that import has completed.
Next, in Theme Options > Import, run the import, but ONLY for Widgets and for Options. There is no need to run it for Content again.
Here is our Video tutorial for demo import that you should check https://www.youtube.com/watch?v=r00Dj0azAkc&list=PLNypD600o6nLFmxDfvLXYbH6LjWSoAhaV&index=5&t=0s
If you continue to have problems with import, you can always submit a ticket to our support and they will try to import demo content for you. Just provide WordPress admin access in your ticket:
Website login URL | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00273.warc.gz | CC-MAIN-2020-45 | 2,103 | 14 |
https://www.cbsnews.com/news/why-you-should-sit-on-a-ball-at-work/ | code | A few months agoafter I decided that assembling a treadmill desk was beyond the scope of what I was willing to do to get exercise in the officeI decided to switch out my office chair for an exercise ball.
I had heard good reports from people who had tried it, so I went on Amazon and ponied up $15 for an anti-burst ball. (I got the 75cm one because I'm a big guy, but most people are good with 65 cm, I believe.)
Here's what I've found since it arrived (and keep in mind that I work from home, so some of the following may be frowned upon at your workplace):
- While some people claim that their backs get sore from sitting on the ball all day, I've found the exact opposite to be true. I have much less back pain in general. Perhaps it's because the ball makes me sit with a better posture now.
- I'm constantly moving around ever-so-slightly, which I'm told is good for my core muscles. Mainly, it keeps me from falling asleep, and I have noticed that I'm a bit more energetic in general.
- If I'm bored, I can lift my legs up and try to balance for a while. I'm getting pretty good at it; I now only fall off the ball once a day or so.
- I'm much more likely to do other exercises in my office now that I have a platform for doing them. I even printed out a list of exercises you can do in a small space with an exercise ball.
Here are some minor annoyances (that may be specific to my situation):
- I have to repump it every week, which is exercise in and of itself.
- The hand pump that came with it is basically useless. I simply took the nozzle from that pump and attached it to a functional one.
- It sometimes wanders off, which my old desk chair never seemed to do.
Overall, though, it's been pretty great, and I highly recommend getting one if you can. It's cheap, fun, and motivating. | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00030.warc.gz | CC-MAIN-2020-10 | 1,791 | 12 |
https://www.systemcenterblog.nl/tag/managed-browser/ | code | It took me a while to figure out the Web Clip/Aweb App story on Android devices, on iOS it is easy, you simple get a shortcut/icon pushed to the start screen of the device. For Android you need to add the Company Portal Widget to the start screen of an Android device. Let’s see how […]READ MORE
Now able to force Web Clips to use Intune Managed Browser
A big wish of a lot of customers came true after the update of Microsoft Intune earlier this week. As of now you are able to force Web Clips (URLs) to use the Intune Managed Browser on Android and iOS devices. This way you are able to configure corporate websites to use the Managed Browser, […]
How to force the usage of the Managed Browser with Microsoft Intune
In my last blog I pointed out the Microsoft Intune Managed Browser, which is pretty cool. But only if users are forced to use this managed browser instead of any other browser that can be installed. What we can do is the following: disable the Allow web browser option and optionally the Allow application store […]
Microsoft Intune Managed Browser now available in the iOS store
We have waited a long time for the Microsoft Intune Managed Browser to be released for iOS; apparently Apple finally approved the application and it is now available in the iTunes App Store. With the Microsoft Intune Managed Browser you are able to manage which websites can be browsed to via the managed browser […]
About Peter Daalmans
Peter tries to speak every year at several events, such as TechDays Netherlands, ExpertsLive, IT/Dev Connections, BriForum, Midwest Management Summit, TechEd Australia, and TechEd New Zealand, and in 2017 Peter had the honor to speak at Microsoft Ignite. See more here.
Author of four books about Configuration Manager and Microsoft Enterprise Mobility + Security
https://www.finalsemprojects.com/identification-spiders-crawlers/ | code | Identification of Spiders and Crawlers
Identification of Spiders and Crawlers:
Spiders are small web programs that harvest information for search engines. These spiders crawl websites. In some ways they are good, because they help websites show up quickly in search results. These programs follow links on the web and gather information. You can also explicitly instruct a robot not to follow any of the links on a page. Alongside the good spiders there are also bad ones, known as spam spiders. These bad spiders try to harvest your email address. Some spiders may also work inefficiently and get caught in endless loops built by dynamically created web pages. So in this project we try to identify the bad spam spiders present in the web pages and try to eradicate them, and we also minimize the bot traffic. This idea was first proposed by Google, namely in Google Analytics.
Implementation Steps done:
Software used: Java + Hadoop + Hive (Hive is used as the NoSQL database)
The given dataset (Google bot-spider) is analyzed for bot identification
The data is uploaded to Hadoop HDFS system
The location of file is stored under hdfs/app/hadoop
The file name is given web_log
We have to start the Hadoop server first.
Then we can check whether Hadoop is running or not.
Upload the web_log file to the Hive database
We created a server_log partition in Hive, where the data are stored
Start the analysis, in which the dataset in Hive is analyzed for bot detection
Finally, the results of bots under different browsers are taken and plotted as a graph.
This shows how many bot URLs are detected in the web log
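As an illustration of the kind of user-agent check the Hive analysis performs (the bot names and log records below are made up, not taken from the project's dataset):

```python
# minimal sketch: flag log entries whose user-agent matches a known crawler
KNOWN_BOTS = ("googlebot", "bingbot", "baiduspider", "yandexbot")

def is_bot(user_agent):
    """Return True when the user-agent string matches a known crawler."""
    ua = user_agent.lower()
    return any(bot in ua for bot in KNOWN_BOTS)

web_log = [
    ("/index.html", "Mozilla/5.0 (compatible; Googlebot/2.1)"),
    ("/about.html", "Mozilla/5.0 (Windows NT 10.0) Chrome/96.0"),
]

bot_urls = [url for url, ua in web_log if is_bot(ua)]
print(bot_urls)  # ['/index.html']
```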
Tools used: Hive, Hadoop, Java | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00336.warc.gz | CC-MAIN-2023-50 | 1,596 | 17 |
http://informationmadness.com/index.php?option=com_content&view=article&id=834:launching-job-board-on-informationmadnesscom&catid=16:annoucements&Itemid=165 | code | Hey everybody, I have finally opened up a job board on Information Madness. Any company or recruiter can post a job here without registering to the site. Once the job is posted, it will go through the posting queue and will go live once approved.
Users can search for a job, get the details of a job, and apply for it. No one has to register to the site to post a job or search for a job. It's all FREE, so enjoy, and good luck with your job hunting (if you are looking for one).
http://allnurses.com/nursing-in-canada/nurse-jobs-canada-297230-page7.html | code | i do not know why i have to sit this whole SEC exam thing but my only guess is that probabbly what they found on my CV and app forms is a "lack of number of hours" it dealing with mental health patients. that's all i can gather.
i was told by GH that all my forms are in the Canada High Commision and that i will hear from them in 4-6 weeks. i know i'm almost there and i'm just biting my tounge and trying to be patient. so far so good and i havn't heard of any complaints about me or any issues, etc. so ill just keep my practice safe i suppose.
my parents kept asking me why Canada and not the U.S. or Australia. i always tell them that an opportunity came to me and not because "i'm not happy with my ward". in a way yes i am not happy but i want to try and look on the bigger picture about my future and my wife. (were getting married this march
are you in Canada now Cocoy2go? how are you? and how is it there? | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163785316/warc/CC-MAIN-20131204132945-00077-ip-10-33-133-15.ec2.internal.warc.gz | CC-MAIN-2013-48 | 916 | 4 |
https://sourceforge.net/directory/natlanguage%3Achinesetraditional/environment%3Awin32/?page=10
This is a project about making a MUD in JAVA. I'm trying to create some new methods coding with MUD. I'm a Taiwanese, so most of docs would be written in Trad.Chinese.
Internet Control Firewall Intrusion Detection and Logger for Inbound and Outbound Traffic. Watches Files and Directories, Drive and Memory Protection.
NiceSearch is a small, fast and powerful full-text search Tool for software developers. NiceSearch is different to other full-text search tool. it will save a history file after search complete, and search these history files when user search again.
Not Only Yet Another LaTeX Front-End (Noyaltex) is a LaTeX GUI front-end that targets on high-level functionalities for composing extremely long and complex documents. It works on Windows and Linux operating systems.
OpenCpp is a new IDE which runs on Microsoft Windows and GTK+. This project gives developers a more convenient way to write, compile and debug their C++ programs.
It is a open source web oa system, includeing workflow engine and forms desinger. provide document managment/report/calender/message/email etc.
The project is designed to crawl the scholar information in the Internet to help the researchers organize the reference works.
Personal Knowlege Discoverer. We are developing a piece of software called Personal Knowlege Discoverer, which monitors all of the user's behavior around read documents and helps the user discover and analyze their personal knowledge.
Prezen is a software to help presenters to do their presentation better and easier. Prezen helps presenter to control the presentation flow with a mobile devices,i.e. Pocket PC, handheld or Smart Devices.
SaYa Project is an RPG game, currently based on the story of DRAGON BALL Z. This RPG game is in development; if anyone is interested in RMXP or RGSS, RM2K3, etc., you are welcome to join us. Please inform me: [email protected]
SafeMSN can protect your MSN communication against sniffer software to watch your activity on network or Internet. It can let you talk to your friend very safety on MSN.
Smart Cantonese Jyutping Input Method is an open source Cantonese input method based on Jyutping. It is an easy-to-use, sentence-based Cantonese input method, like Microsoft Pinyin and the Microsoft Hong Kong Cantonese input method.
Somatotype: evaluation and calculation of somatotype, with a database to save all evaluations in an Excel sheet!
We are going to build one big supercomputer through cloud computing software.
A project in which artifacts of practical code are accumulated & shared within developers, for purpose of quick learning/reuse or quick memory-recovery. It defined common interface for developers to integrate their different-purposed code together easily.
Auto-changing desktop wallpaper with a minimalistic multilingual interface; supports any graphical format and any picture or monitor resolution. No installation.
A knockoff ("shanzhai") version of Disney Junior
TrinkaFive is a simple yet sophisticated bookkeeping and project managment software for all sizes of businesses. Included (but not part of its main intent) is a tarot magick utility for casting prosperity charms and sigils. Currently being programmed.
A very simple programme using the knowledge of C. Allows the user to play around with multiplication and addition.
VirtualNDS is another Nintendo DS emulator. Its target is fully compatible with NDS hardware.
Levin Project is a full Business & personal financial software Group.
WCell is not using this repository anymore - Please refer to our homepage @ http://www.wcell.org and check out our new repository on Github @ http://github.com/WCell/WCell
WebRocks - Web Application Rocks Rapidly | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542412.97/warc/CC-MAIN-20161202170902-00375-ip-10-31-129-80.ec2.internal.warc.gz | CC-MAIN-2016-50 | 5,510 | 69 |
https://sparta.github.io/doc/run.html | code | run N keyword values ...
upto value = none
start value = N1
  N1 = timestep at which 1st run started
stop value = N2
  N2 = timestep at which last run will end
pre value = no or yes
post value = no or yes
every values = M c1 c2 ...
  M = break the run into M-timestep segments and invoke one or more
      commands between each segment
  c1,c2,...,cN = one or more SPARTA commands, each enclosed in quotes
  c1 = NULL means no command will be invoked
run 10000
run 1000000 upto
run 100 start 0 stop 1000
run 1000 pre no post yes
run 100000 start 0 stop 1000000 every 1000 "print 'Temp = $t'"
run 100000 every 1000 NULL
Run or continue a simulation for a specified number of timesteps.
A value of N = 0 is acceptable; only the statistics of the system are computed and printed without taking a timestep.
The upto keyword means to perform a run starting at the current timestep up to the specified timestep. E.g. if the current timestep is 10,000 and "run 100000 upto" is used, then an additional 90,000 timesteps will be run. This can be useful for very long runs on a machine that allocates chunks of time and terminates your job when the time is exceeded. If you need to restart your script multiple times (reading in the last restart file), you can keep restarting your script with the same run command until the simulation finally completes.
The start or stop keywords can be used if multiple runs are being performed and you want a variable or fix command that changes some value over time (e.g. target temperature) to make the change across the entire set of runs and not just a single run.
For example, consider these commands followed by 10 run commands:
variable myTemp equal ramp(300,500)
surf_collide 1 diffuse v_myTemp 0.5
run 1000 start 0 stop 10000
run 1000 start 0 stop 10000
...
run 1000 start 0 stop 10000
The ramp() function in the variable and its use in the "surf_collide" command will ramp the target temperature from 300 to 500 during a run. If the run commands did not have the start/stop keywords (just "run 1000"), then the temperature would ramp from 300 to 500 during the 1000 steps of each run. With the start/stop keywords, the ramping takes place smoothly over the 10000 steps of all the runs together.
The pre and post keywords can be used to streamline the setup, clean-up, and associated output to the screen that happens before and after a run. This can be useful if you wish to do many short runs in succession (e.g. SPARTA is being called as a library which is doing other computations between successive short SPARTA runs).
By default (pre and post = yes), SPARTA zeroes statistical counts before every run and initializes other fixes and computes as needed. And after every run it gathers and prints timings statistics. If a run is just a continuation of a previous run (i.e. no settings are changed), the initial computation is not necessary. So if pre is specified as "no" then the initial setup is skipped, except for printing statistical info. Note that if pre is set to "no" for the very 1st run SPARTA performs, then it is overridden, since the initial setup computations must be done.
IMPORTANT NOTE: If your input script changes settings between 2 runs (e.g. adds a fix or compute), then the initial setup must be performed. SPARTA does not check for this, but it would be an error to use the pre no option in this case.
If post is specified as "no", the full timing and statistical output is skipped; only a one-line summary timing is printed.
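For example, a long simulation can be broken into continuation runs that skip the redundant setup and the full timing output (a sketch using only the keywords described above):

run 10000
run 10000 pre no post no
run 10000 pre no post no

Only the first run performs the full setup; remember that "pre no" is only safe when no settings change between runs.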
The every keyword provides a means of breaking a SPARTA run into a series of shorter runs. Optionally, one or more SPARTA commands (c1, c2, ..., cN) will be executed in between the short runs. If used, the every keyword must be the last keyword, since it has a variable number of arguments. Each of the trailing arguments is a single SPARTA command, and each command should be enclosed in quotes, so that the entire command will be treated as a single argument. This will also prevent any variables in the command from being evaluated until it is executed multiple times during the run. Note that if a command itself needs one of its arguments quoted (e.g. the print command), then you can use a combination of single and double quotes, as in the example above or below.
The every keyword is a means to avoid listing a long series of runs and interleaving commands in your input script. For example, a print command could be invoked or a fix could be redefined, e.g. to reset a load balancing parameter. Or this could be useful for invoking a command you have added to SPARTA that wraps some other code (e.g. as a library) to perform a computation periodically during a long SPARTA run. See Section 8 of the manual for info about how to add new commands to SPARTA. See Section 6.7 of the manual for ideas about how to couple SPARTA to other codes.
With the every option, N total steps are simulated, in shorter runs of M steps each. After each M-length run, the specified commands are invoked. If only a single command is specified as NULL, then no command is invoked. Thus these lines:
compute t temp
variable myT equal c_t
run 6000 every 2000 "print 'Temp = $myT'"
are the equivalent of:
compute t temp
variable myT equal c_t
run 2000
print "Temp = $myT"
run 2000
print "Temp = $myT"
run 2000
print "Temp = $myT"
which does 3 runs of 2000 steps and prints the temperature between runs. Note that the variable "$myT" will be evaluated afresh each time the print command is executed.
Note that by using the line continuation character "&", the run every command can be spread across many lines, though it is still a single command:
run 100000 every 1000 &
  "print 'Minimum value = $a'" &
  "print 'Maximum value = $b'" &
  "print 'Temp = $c'"
If the pre and post options are set to "no" when used with the every keyword, then the 1st run will do the full setup and the last run will print the full timing summary, but these operations will be skipped for intermediate runs.
IMPORTANT NOTE: You might hope to specify a command that exits the run by jumping out of the loop, e.g.
compute t temp
variable T equal c_t
run 10000 every 100 "if '$T < 300.0' then 'jump SELF afterrun'"
Unfortunately this will not currently work. The run command simply executes each command one at a time each time it pauses, then continues the run. You can replace the jump command with a simple quit command and cause SPARTA to exit during the middle of a run when the condition is met.
The number of specified timesteps N must fit in a signed 32-bit integer, so you are limited to slightly more than 2 billion steps (2^31) in a single run. However, you can perform successive runs to run a simulation for any number of steps (ok, up to 2^63 steps).
Related commands: none
The option defaults are start = the current timestep, stop = current timestep + N, pre = yes, and post = yes.
https://grocker.readthedocs.io/en/latest/troubleshooting.html

I have a PermissionError: [Errno 13] Permission denied
Grocker does not ask for superuser rights before invoking Docker. The Docker socket has to be readable and writable by the current user. Many distributions create this socket to be readable and writable by root and the docker group.
One way to be able to use Grocker is to add your user to the docker group. On Debian:
$ sudo adduser $(whoami) docker
$ su $(whoami)  # reload groups; you should also restart your session
https://community.esri.com/t5/arcgis-pro-questions/why-create-replica-gp-in-arcgis-pro-is-terribly-slower-than/td-p/677319

I have a workflow built around Create Replica and Synchronize Changes at my client's site. The same Create Replica operation on a dataset with a given geometry (spatial filter) runs about 300% faster in ArcMap than in ArcGIS Pro.
I requested a checkout of a dataset containing about 5 feature classes, with around 200-300 features expected to be checked out. ArcMap completes it within 3 minutes over a VPN network; for the same dataset and the same geometry filter, ArcGIS Pro takes about 11 minutes. Have any reports about Create Replica slowness been filed with ESRI? Has anyone else reported these problems?
I am using Oracle 11g as the geodatabase with a 10.3.1 setup (user-schema geodatabase setup).
ArcMap has delivered nicely on the checkout and check-in, but ArcGIS Pro 2.6 is terribly slow, and the client is unhappy with the slowness after having decided to move to ArcGIS Pro.
https://community.spiceworks.com/topic/362907-pulling-attachments-out-of-exchange

What I am trying to do is pull files out of an Exchange server (one email account) and have them put into a network share for a few people to view. Anyone have any ideas on this?
Delegate yourself full rights to that mailbox. Open it in your Outlook or OWA. Download the files and put them in a network share.
I would just create a shared mailbox, but if you're good with VB you can try this
Here is a 3rd-party solution as well.
http://atkinsondev.com/blog/vue-1-upgrade-2/

To test out Vue, I built a small test app in Vue 1.0. Recently Vue released version 2.0 and I set about upgrading the small app. Upgrading a full version can be a big chunk of work in some frameworks, but from the docs it looked like the Vue API was relatively stable between versions.
The Vue team created a couple handy resources to help ease the migration from 1.0 to 2.0. First off, they created a thorough migration guide in the docs. They also made a script to check your app code for deprecated or changed coding patterns. When I ran that script on my small app it didn’t find any issues, but the authors do warn that it won’t find every possible area in the code that needs to be changed.
One change the script missed in my app was that component templates can now only have one root element. That is - you can only have one top-level element under the template tag. This change isn't a big deal - it exposed that one of my components was probably too big anyway - and it was trivial to refactor that component into two smaller components.
When I originally created my app I used the Vue command line tool to create a scaffolded Vue app with Webpack. To make sure I had updated versions of Node modules and Vue scripts, I created a fresh app using the Vue 2.0 template. Then I used a folder diff tool to quickly find the package.json and script differences between the two. Thankfully, there were almost no changes in the script and build files and only a few module and version changes in package.json to make.
Last but not least, I ran the unit and functional tests in my app to verify the app functionality held up during the upgrade, and they passed with flying colors. Overall I really appreciate the work the Vue authors put into keeping the framework very stable across releases and creating thorough migration documentation and handy tooling!
https://www.nebumind.com/tag/3dprinting/

The weekly newspaper VDI Nachrichten has published an article on the nebumind software and its applications and benefits in 3D printing. The full article can be found in the weekly newspaper VDI Nachrichten, issue #45. You can read a summary of the article here.
We have performed another integration of our software into an EOS machine used for metal 3D printing together with Ariane Group. Our software can now be easily connected to EOS 3D printers and collect and visualize data as “digital product twins” during manufacture.
ArianeGroup chooses nebumind’s software to generate “digital product twins” during their metal 3D printing process in order to analyse and improve the quality of their printed components.
https://lists.oasis-open.org/archives/xliff/201310/msg00083.html

Subject: pre-beta version of the xliffRoundTrip Tool for XLIFF 2.0
I’ve posted my pre-beta version of the xliffRoundTrip Tool for XLIFF 2.0.
You should be able to transform any well-formed XML file to an XLIFF 2.0 file; translate the XLIFF file; and transform it back to its original XML format.
If you’d like, please give it a spin and give me your feedback.
This tool is very vanilla at the moment. But as I gather feedback I intend to make it more robust.
Here are some known limitations that I plan to fix in the very near future:
- XML documents with XML namespaces are not yet tested (and not certified to be supported yet)
- Segmentation has not yet been tested (and not certified to be supported yet)
- (more limitations will be documented and fixed as I discover them)
https://discuss.pytorch.org/t/learning-rate-modulation-and-gradient-descent/133695

I was trying to implement a model these last few days and I stumbled across a problem that I cannot solve:
Here is the network (simplified)
The blue part has its `requires_grad` set to False, and I manually update the weight as:

self.pred.weight.data = self.pred.weight.data + 0.1 * modulation * deltaW[0,0].T

where `modulation = torch.mean(out_modulation)`.
To make it clear: I run a simulation of N steps, and at every time step I manually update the weights of the online layer.
After all these steps I want to minimize the overall error by doing a backprop (autograd) over the red area, where the error is the sum of the prediction errors in the blue area (i.e. the error does not explicitly depend on the modulation population).
The problem is that since the error does not explicitly depend on the red area, PyTorch cannot estimate a gradient for it. Is there a way to make the graph understand that the manual updates depend on the modulation population?
full forward code:

```python
def forward(self, previous, observation, hidden_modulation):
    prediction = self.pred(previous)  # linear unit
    prediction_error = observation - prediction
    out_modulation, hidden_modulation = self.surprise(
        torch.abs(prediction_error), hidden_modulation)  # LSTM
    modulation = torch.mean(torch.sigmoid(out_modulation))
    deltaW = torch.einsum("bij,bik->bijk", previous, prediction_error)
    # online update
    self.pred.weight.data = self.pred.weight.data + modulation * deltaW[0, 0].T
    self.pred.weight.data[self.pred.weight.data < 0] = 0
    return prediction_error, hidden_modulation, torch.relu(prediction), modulation
```
http://www.vistaheads.com/forums/microsoft-public-internetexplorer-general/345244-ie-8-rc1.html

Leonard Grey wrote:
> Internet Explorer 8 has been released. Do not install a beta version.
> Leonard Grey
> Errare humanum est
> titus12 wrote:
>> Is IE8 RC1 OK to install as a final install for IE? IE7 is working fine
>> on my XP Home SP3 system.
>> Thank you,
The news articles said it was to go non-beta on the 18th (or maybe yesterday). You saw it show up in the Windows Updates page? I see a link on IE's home page, but that's always said that, even through all the release candidates for the beta version. The "Get It Now" link takes me to IE7.

From its home page for Internet Explorer, the download link for IE8 goes through atdmt.com, a known advertising revenue service provider. That gets blocked by many ad-blockers, including mine. Microsoft uses them to track from where the downloads are requested and other statistical data. After disabling the ad blocking, I get "IE cannot display page", which could be a server problem (perhaps with atdmt.com). Alas, the redirect through atdmt.com doesn't provide the destination page as parameters in the URL (so I could see where atdmt.com would go to).

I also didn't find it listed at www.microsoft.com/downloads. So is it really released? Until they add it to their Windows Update site, I won't bother to download it - and only after imaging my OS partition for a true restore (and will probably still use Returnil to trial it for a while to determine compatibility).
https://www.labellerr.com/blog/hands-on-with-driver-activity-monitoring-system-for-autonomous-driving/

Driver monitoring systems are increasingly important to make on-road driving safer. To do this, the car must be able to monitor the driver's behavior and intervene if necessary to ensure the safety of everyone on the road.
One of the biggest challenges in developing driver monitoring systems is the complexity of human behavior. Drivers can exhibit a wide range of behaviors, and the system must be able to distinguish between normal driving behaviors and potentially dangerous ones.
Additionally, the system must handle changing conditions, such as lighting, weather, and road conditions. This requires using advanced computer vision and machine learning algorithms to detect and accurately classify the driver's behavior in real-time. By solving these challenges, driver monitoring systems can enable safe and reliable autonomous driving.
Figure: Driver monitoring system
According to a report by MarketsandMarkets, the global driver monitoring system market is expected to grow from $717 million in 2020 to $1,740 million by 2025 at a compound annual growth rate (CAGR) of 19.9% during the forecast period. This growth is driven by the increasing demand for advanced driver assistance systems (ADAS) and the growing road safety awareness.
The problem of driver distraction and fatigue has been estimated to cost the US economy $160 billion annually, according to a report by the National Safety Council. This cost includes medical expenses, property damage, and lost productivity due to accidents caused by distracted or fatigued drivers. Furthermore, the problem of driver distraction and fatigue leads to thousands of fatalities and injuries yearly, which has a significant emotional and societal impact.
Computer vision plays a critical role in driver monitoring systems for autonomous driving. It uses machine learning algorithms and deep neural networks to analyze video data from in-car cameras, tracking the driver's eyes, head position, and other facial features to determine their alertness and attention to the road.
By detecting signs of distraction, drowsiness, or other potentially dangerous behaviors, computer vision systems can alert the driver or the autonomous vehicle's control system to take action, such as providing visual or auditory warnings or adjusting the vehicle's driving mode. Computer vision can also help improve the overall driving experience by delivering personalized settings, such as seat and mirror positions, based on the driver's facial recognition.
CIPIA (Continuous Identification and Prediction of Interaction Actions) is a computer vision company that provides advanced driver monitoring and assistance systems for autonomous driving. The company's technology uses deep learning algorithms and 3D vision sensors to detect and analyze drivers' behavior in real-time, including their eye movements, head poses, and hand gestures. CIPIA's system can detect drowsiness, distraction, and other dangerous driving behaviors and provide alerts or corrective actions to prevent accidents.
The company's mission is to enhance the safety and comfort of autonomous vehicles for passengers and other road users while providing a seamless driving experience.
To proceed further and understand the CNN-based approach for detecting driver activities, one should be familiar with the following:
- Python: All the below code will be written using python.
- Tensorflow: TensorFlow is a free, open-source machine learning and artificial intelligence software library. It can be utilized for various tasks but is most commonly employed for deep neural network training and inference.
- Keras: Keras is a Python interface for artificial neural networks and is open-source software. Keras serves as an interface for the TensorFlow library.
- Kaggle: Kaggle is a platform for data science competitions where users can work on real-world problems, build their skills, and compete with other data scientists. It also provides a community for sharing and collaborating on data science projects and resources.
Apart from the above-listed tools, there are certain other theoretical concepts one should be familiar with to understand the below tutorial.
Transfer learning is a machine learning technique that adapts a pre-trained model to a new task. This technique is widely used in deep learning because it dramatically reduces the data and computing resources required to train a model.
This technique avoids the need to start the training process from scratch, as it takes advantage of the knowledge learned from solving the first problem that has already been trained on a large dataset.
The pre-trained model can be a general-purpose model trained on a large dataset like ImageNet or a specific model trained for a similar task. The idea behind transfer learning is that the learned features in the pre-trained model are highly relevant to the new task and can be used as a starting point for fine-tuning the model on the new dataset.
Transfer learning has proven highly effective in various applications, including computer vision, natural language processing, and speech recognition.
VGG19 is a deep convolutional neural network architecture proposed in 2014 by the Visual Geometry Group (VGG) at the University of Oxford. It is a variant of the VGG16 architecture, a popular convolutional neural network used for image recognition.
The VGG19 architecture comprises 19 layers, including 16 convolutional layers and three fully connected layers. The convolutional layers are designed to extract features from images of different scales and orientations, while the fully connected layers are responsible for classification.
One notable feature of the VGG19 architecture is that it uses small 3x3 filters for convolution, which allows it to capture finer details in images. The architecture also uses max pooling layers after every two convolutional layers, which helps reduce the input's spatial size.
VGG19 has been used for various image recognition tasks, including object recognition, scene recognition, and image classification. It has also been used as a feature extractor for machine-learning tasks, such as image captioning and retrieval.
In this blog, we will develop a small-scale prototype of our driver monitoring system, which describes the driver's state when driving in his vehicle. To proceed with our DMS system, we proceed in the following steps:
- We begin by importing the required libraries.
- Then, we visualize the dataset.
- Then, we create the train and validation data by splitting the dataset in the ratio of 80-20.
- Next, we create ImageGenerators, which perform data augmentation in real-time.
- Next, we create our model. For this, we use the concept of transfer learning.
- We finally train our model and make predictions on the given set of test images.
Figure: Flowchart for methodology
The Small Home Objects (SHO) image dataset available on Kaggle, created by Hossein Mousavi, is a collection of 1,312 high-resolution images of 10 small household objects. The objects include a calculator, stapler, scissors, pencil, keychain, pen, marker, eraser, ruler, and a paper clip.
The images were captured using a smartphone camera and saved in JPG format. They have a resolution of 4032 x 3024 pixels and are labeled according to the object they depict. The dataset is split into training and testing sets, with 80% of the images used for training and 20% for testing.
The dataset is intended for computer vision and machine learning research, particularly for object recognition and classification tasks. The dataset's small size and relatively simple objects make it a valuable resource for training and testing models for beginners or those interested in a more straightforward classification problem.
Below, I’ve attached a sample image corresponding to each activity class of the driver.
The below points are to be noted before starting the tutorial.
- The below code is written in my Kaggle notebook. For this, you first need to have a Kaggle account. So, if not, you need to sign up and create a Kaggle account.
- Once you have created your account, visit Distracted Driver Activity and create a new notebook.
- Run the below code as given. For better results, you can try hyperparameter tuning, i.e., changing the batch size, number of epochs, etc.
Hands-on with Code
We begin by importing the required libraries.
Next, we perform some data visualization and then perform a train-test split. For our project, we do a train-test split of 80-20.
I am plotting the data for each activity of the driver.
From the above plots, we can see that our model could detect drivers in 10 distinct states.
Now, we perform our train-validation split on the dataset. We have performed a split of 80-20 on our dataset, which means that 80% of our data is in the train set and the remaining 20% in the validation set.
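The post does not show the splitting code itself; a rough stand-in using only the standard library might look like the following (the function name and details are mine — given the /kaggle/working/output/train and /kaggle/working/output/val paths used later, the notebook likely uses a helper such as the split-folders package instead):

```python
import os
import random
import shutil

def split_dataset(src, dst, val_ratio=0.2, seed=42):
    """Copy a class-per-folder image dataset into dst/train and dst/val."""
    rng = random.Random(seed)
    for cls in sorted(os.listdir(src)):
        files = sorted(os.listdir(os.path.join(src, cls)))
        rng.shuffle(files)
        n_val = int(len(files) * val_ratio)  # 20% of each class goes to val
        for subset, names in (("val", files[:n_val]), ("train", files[n_val:])):
            out_dir = os.path.join(dst, subset, cls)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src, cls, name), out_dir)
```

Splitting per class (rather than over the whole file list) keeps the 80-20 ratio inside every activity class.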
Next, we perform some Data Augmentation on our dataset.
This code sets up generators for training and validation data for image classification using Keras' ImageDataGenerator class.
First, it sets the data generator's batch size and image size, with batch_size equal to 32 and img_height and img_width equal to 224.
Then, it creates an instance of the ImageDataGenerator class with no image augmentation, which will be used for the training data. Another instance is created for the validation data.
Next, it creates separate generators for the training and validation sets using the flow_from_directory method. The training data specifies the directory containing the training data (/kaggle/working/output/train), target size (224 x 224), batch size (32), and type of labels (categorical). It also specifies that the subset of data to use is the training set (subset='training').
The validation data specifies the directory containing the validation data (/kaggle/working/output/val), target size (224 x 224), batch size (32), and type of labels (categorical).
Next, we create a function for plotting loss and accuracy.
The code defines a function to plot the training history of a machine learning model, displaying both the validation accuracy and validation loss over epochs. It uses the matplotlib library to create two plots for accuracy and loss, with separate lines for the training and validation data.
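A minimal version of such a helper (function and argument names are my own, not from the post) could be:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot train/validation accuracy and loss from a Keras History
    object, or from any plain dict with the same keys."""
    h = history.history if hasattr(history, "history") else history
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(12, 4))
    ax_acc.plot(h["accuracy"], label="train")
    ax_acc.plot(h["val_accuracy"], label="validation")
    ax_acc.set_title("Accuracy")
    ax_acc.set_xlabel("epoch")
    ax_acc.legend()
    ax_loss.plot(h["loss"], label="train")
    ax_loss.plot(h["val_loss"], label="validation")
    ax_loss.set_title("Loss")
    ax_loss.set_xlabel("epoch")
    ax_loss.legend()
    return fig
```

After training, calling `plot_history(history)` on the object returned by `model.fit(...)` produces the two plots described above.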
Next, we create our model and train it for five epochs.
This code defines a new model by adding some layers to a pre-trained VGG19 model with ImageNet weights, excluding the top layer. The new layers consist of a flattened layer, three dense layers with ReLU activation functions, two dropout layers, and a final dense layer with a softmax activation function. The new model is then compiled with Stochastic Gradient Descent (SGD) optimizer, categorical cross-entropy loss, and accuracy metric.
The input shape of the pre-trained model is (224, 224, 3), which means that it expects images with a width and height of 224 pixels and three color channels (RGB). The output shape of the new model is (10), which predicts ten different classes.
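Based on that description, a sketch of the architecture might look like the following. The dense-layer widths and dropout rates are my guesses (the post only says "three dense layers with ReLU" and "two dropout layers"), and `weights=None` is used here so the sketch runs without downloading the ImageNet weights the tutorial actually uses:

```python
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

# VGG19 backbone with the top classifier removed
# (the tutorial passes weights="imagenet"; None avoids the download here).
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))

x = Flatten()(base.output)
x = Dense(512, activation="relu")(x)  # layer widths are assumptions
x = Dropout(0.5)(x)
x = Dense(256, activation="relu")(x)
x = Dropout(0.5)(x)
x = Dense(64, activation="relu")(x)
outputs = Dense(10, activation="softmax")(x)  # 10 driver-activity classes

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer=SGD(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The model can then be trained with `model.fit(train_generator, validation_data=validation_generator, epochs=5)`.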
For this project, we have used VGG19 due to the following reasons:
- Strong performance: VGG19 has achieved state-of-the-art performance on various image classification benchmarks, such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset.
- Transfer learning: VGG19 has been pre-trained on large-scale image datasets, such as ImageNet, which allows it to be used for transfer learning.
- Training Time: VGG19 has a relatively simple architecture compared to more recent deep neural networks such as ResNet, DenseNet, and EfficientNet. This simplicity can make it faster to train than these newer architectures.
After the training is complete, we plot the loss and accuracy curves.
Figure: Accuracy Plot and Loss Plot
From the above two plots, we observe that:
- The loss decreases and converges toward 0.
- The accuracy increases and converges to a value near 1.
Also, we don't see much oscillation in the curve; thus, our learning rate is also fine.
In conclusion, driver monitoring systems are increasingly important in developing autonomous driving technologies. These systems must accurately detect and classify a wide range of driver behaviors in real-time using advanced computer vision and machine learning algorithms.
In the above tutorial, we saw the implementation of activity detection for distracted drivers using Keras' ImageDataGenerator class. The tutorial starts with importing the required libraries and performing data visualization, followed by a train-test split. The data is then augmented using the ImageDataGenerator class.
A function for plotting loss and accuracy is defined, and a model is created by adding layers to a pre-trained VGG19 model with ImageNet weights, excluding the top layer. The new model is then compiled with a Stochastic Gradient Descent optimizer and categorical cross-entropy loss.
Finally, the model is trained for five epochs, and the loss and accuracy curves are plotted. The tutorial also suggests that hyperparameter tuning can be performed to improve results.
Companies like CIPIA provide advanced driver monitoring and assistance systems for autonomous driving using deep learning algorithms and 3D vision sensors.
https://careerhub.students.duke.edu/classes/vmware-nsx-t-3-0-essential-training-01-vsphere-networking-essentials/

VMware NSX-T 3.0 Essential Training: 01 vSphere Networking Essentials
VMware NSX is the most disruptive network technology in recent memory. Demand for employees who understand NSX will continue to grow as the product reaches maturity. In this series of courses, VMware Certified Instructor Rick Crisci helps you understand all the concepts behind NSX-T 3.0. Rick begins with basic networking. He covers networking fundamentals like the OSI model, Layer 2 switching, and maximum transmission units (MTU). Rick goes into ethernet broadcasts, the spanning tree protocol (STP), and base IP standards. He also discusses virtual networking basics like the vSphere standard switches and distributed switches. This course helps you prepare for the VMware VCP-NV exam. To take the VCP-NV exam, you will need to complete some course requirements from VMware. Be sure to check those requirements, as well, as you prepare to get certified.
Note: This course was created by Rick Crisci. We are pleased to host this training in our library.
https://blog.bitscry.com/2017/06/26/decimal-to-hexadecimal-in-sql/ | code | While trying to convert some integers to hex values in SQL I found a few options that worked on ints and bigints but nothing that worked on anything larger and as I was needing to convert some integers of datatype decimal(20) I created the below function which is based on that found here.
The only real change other than the input datatype was adding a floor on the division of the value by the base to remove the decimal places added by the decimal datatype.
CREATE FUNCTION ConvertToBase
(
    @value AS decimal(20),
    @base AS INT
)
RETURNS VARCHAR(MAX)
AS
BEGIN
    -- some variables
    DECLARE @characters CHAR(36),
            @result VARCHAR(MAX);

    -- the encoding string and the default result
    SELECT @characters = '0123456789abcdefghijklmnopqrstuvwxyz',
           @result = '';

    -- make sure it's something we can encode. you can't have
    -- base 1, but if we extended the length of our @character
    -- string, we could have greater than base 36
    IF @value < 0 OR @base < 2 OR @base > 36 RETURN NULL;

    -- until the value is completely converted, get the modulus
    -- of the value and prepend it to the result string. then
    -- divide the value by the base and truncate the remainder
    WHILE @value > 0
        SELECT @result = SUBSTRING(@characters, @value % @base + 1, 1) + @result,
               @value = floor(@value / @base);

    -- return our results
    RETURN @result;
END
go

select dbo.ConvertToBase(18446744073709551615, 16) --ffffffffffffffff
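For a quick cross-check of the algorithm outside SQL Server, here is a rough Python equivalent (the function name is mine, not part of the SQL above):

```python
def convert_to_base(value, base):
    """Mirror of the SQL ConvertToBase: repeatedly take the modulus,
    prepend the matching digit, then floor-divide by the base."""
    characters = "0123456789abcdefghijklmnopqrstuvwxyz"
    if value < 0 or base < 2 or base > 36:
        return None
    result = ""
    while value > 0:
        result = characters[value % base] + result
        value //= base  # floor division, like floor(@value / @base)
    return result

print(convert_to_base(18446744073709551615, 16))  # ffffffffffffffff
```

Like the SQL version, this returns an empty string for an input of 0, since the loop never runs.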
http://vallerand.me/?paged=2

Finally, after the official announcement of the CFI (Innovation Canada) made today, I can also officially announce that I received 887 000$cad (329k$ from Innovation Canada, 329k$ from the Quebec Superior Education, Research, Science and Technologies department and 227k$ from private funding) for the acquisition of a laboratory on smart cities and mobile computing, which will be called LIMVI (in French, Laboratoire de recherche en Informatique Mobile et Villes Intelligentes). The team around this funding is composed of me (main investigator), Évelyne Vallières from LICEF/TELUQ, Bruno Bouchard from UQAC, Sylvain Giroux from Université de Sherbrooke and Dragos Vieru from LICEF/TELUQ.
With this funding, we will acquire 3 groups of components:
- a mobile laboratory, i.e. a highly equipped and adapted recreational vehicle to conduct studies and validations directly in urban environments;
- several wireless sensor nodes + sensors for urban deployment
- a fleet of wireless devices (smart phones and tablets) for development and field experimentations.
This research infrastructure is unique in Canada and will help my team and the colleagues involved develop new assistive technologies for users in urban environments that are context-aware and adapted to user profiles. These assistive technologies will mainly aim at improving the quality of life of seniors and persons with special needs in their activities of daily living in smart urban environments.
On December 2nd, I had the honor to break the ice and be the first presenter in the Context-Awareness track at the ACM International Conference on Advances in Mobile Computing and Multimedia (MoMM2013) in Vienna, Austria.
The reference to my paper is :
- Gouin-Vallerand Charles, Jesus Alberto Montero De la Cruz, “Analysis of a context-aware recommender system model for smart urban environment”, In Proceeding of the 2013 ACM Conference on Advances in Mobile Computing and Multimedia (MoMM2013), Vienna, Austria, December 2013
I also had fun visiting the several Christmas markets and drinking cups of Glühwein. Cheers!
I’m pretty proud to announce that my colleagues and I (Prof. Évelyne Vallières, Daniel Lemire and Richard Hotte) signed an agreement for a research project with the Foundation of the Canadian Automobile Association (CAA), Quebec division, worth $180k for 2013 to 2015. This project will focus on the effects of fatigue (tiredness) on older drivers (55+ … I know that people between 55 and 65 don’t want to be categorized as “elders”) and will study feedback methods, based on a coaching approach, to increase drivers’ self-awareness of their own driving behaviors. Personally, I will work on the computer-science part of the project, where we will build driver models based on car, contextual and user information. We are currently experimenting with different machine-learning algorithms on prototype data from the car and a driver (me!).
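Not the project's actual code or data, just an illustrative toy of what a "driver model" could look like: a nearest-centroid classifier over two invented features (steering-correction rate and lane deviation). Every name and number below is made up; the project itself compares several real machine-learning algorithms.

```python
# Toy labeled "segments": (steering-correction rate, lane deviation).
# Both the features and the values are purely illustrative.
samples = {
    "alert":    [(0.2, 0.1), (0.3, 0.2), (0.25, 0.15)],
    "fatigued": [(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)],
}

def centroid(points):
    """Mean point of a list of 2-D feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in samples.items()}

def classify(x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

print(classify((0.85, 0.75)))  # → fatigued
```

A real driver model would of course use many more signals (speed, time of day, facial cues from LISA's cameras) and a properly validated learner.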
This research project will make extensive use of the “smart” car LISA (French acronym for smart laboratory on road safety), a car full of sensors, cameras and facial-recognition systems, financed by the CFI (Nomade Project, $400k).
I will post more info on this exciting project in the next months, so stay tuned!
This morning (March 7) I gave a talk on my research at the University of Quebec at Montreal (UQAM) for the Ph.D. students in Cognitive Computer Science, entitled “L’intelligence ambiante au service des personnes dépendantes”, or if you prefer: “Ambient Intelligence at the Service of Dependent People”.
You can look at my slides at http://prezi.com/fzky2afrlipe/seminaire_dic_7mars (they are in French only!).
After 5-6 years of postgraduate education between Quebec and France, one year of postdoc at CMU in Pittsburgh and two marvelous kids, I have finally reached my goal: a position as professor of computer science at a university in Quebec, the Télé-Université du Québec.

Sometimes it's fun to remember the whole path I followed to reach this goal, and all the people (mostly my family…) asking when I would finally get a job (for a lot of them, doing research during a Ph.D. is not a job). I can officially say that at 31 years old, I have my first official job! (Internships and student jobs don't count.)

Ok, enough talk about that. Let me introduce this new website and blog. In this blog you will find information about my research interests, projects and publications, plus posts on topics from computer science to research in general, maybe a little bit about politics, and surely some stuff about my next trips (conferences or not). You will probably notice several English grammatical errors in my future posts; I apologize: French being my first language, I'm still working to improve my English (even though I have written several scientific papers in English). So stay tuned, and don't forget to submit comments!
https://outlook.uservoice.com/forums/910579-outlook-com-on-the-mobile-web/suggestions/39333688-log-in

When logging in, the process is to enter your email address first. After typing it in, I can press the Return key to move on to the password portion of logging in. However, after entering the password, the Return key does not log you in. You need to scroll down and physically select the Sign In button. This is somewhat annoying.
https://blog.stoplight.io/using-component-libraries-for-effective-api-governance

API design can be a challenging task, especially when dealing with large and complex systems. One of the biggest challenges is ensuring consistency across the different API projects within an organization. When multiple developers work on separate projects, it can be challenging to maintain a unified and consistent approach. Additionally, when changes need to be made to a shared component, updating all of the APIs that rely on that component can be particularly taxing.
This is where Component Libraries come in. Component Libraries, a newly released feature in Stoplight, helps promote consistency and reuse, allowing developers to create reusable API models and consume them in multiple API projects.
What are Component Libraries?
A Component Library is a set of reusable models that can be consumed across multiple API projects. By creating these reusable models, you can promote consistency and reduce duplication across their API programs.
This feature allows API program managers to define and publish models, which can then be consumed by other projects rather than having to recreate the same model multiple times. This approach can significantly reduce the amount of time and effort required to create and maintain API projects. You can also update the reusable models in one place, and the changes are immediately available in all the projects that consume the model.
What kind of OpenAPI models should be added to Component Libraries?
Any commonly used and reusable models that can be shared across multiple API projects should be added to Component Libraries. Some examples of OpenAPI models that can be added to Component Libraries include:
- Data models: Models that describe the structure and format of the data used in the API. For example, a customer data model that contains fields like name, email, phone number, etc.
- Authentication models: Models that define the authentication requirements for the API. For example, a JWT authentication model that specifies the token structure and validation requirements.
- Error models: Models that describe the error responses returned by the API. For example, a standard error response model that includes a message, error code, and description.
- Response models: Models that describe the structure of the responses returned by the API. For example, a response model that describes the structure of a product object returned by an e-commerce API.
In general, any models that are used repeatedly in different API projects and can be defined in a reusable way can be added to Component Libraries.
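As a concrete, non-Stoplight-specific sketch of what this reuse looks like at the OpenAPI level, here is a shared error model published once and consumed by `$ref` instead of being copied. The names (`Error`, `#/components/schemas/Error`) and the tiny pointer resolver are illustrative only:

```python
# A reusable "Error" model, as it might live in a component library.
error_schema = {
    "type": "object",
    "required": ["code", "message"],
    "properties": {
        "code": {"type": "integer", "description": "Machine-readable error code"},
        "message": {"type": "string", "description": "Human-readable summary"},
    },
}

# The library publishes the model once...
library = {"components": {"schemas": {"Error": error_schema}}}

# ...and an API project consumes it by reference instead of duplicating it.
api_fragment = {
    "responses": {
        "404": {
            "description": "Not found",
            "content": {
                "application/json": {
                    "schema": {"$ref": "#/components/schemas/Error"}
                }
            },
        }
    }
}

def resolve(ref: str, doc: dict) -> dict:
    """Follow a local JSON pointer like '#/components/schemas/Error'."""
    node = doc
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

ref = api_fragment["responses"]["404"]["content"]["application/json"]["schema"]["$ref"]
resolved = resolve(ref, library)
print(resolved is error_schema)  # → True: one shared definition, many consumers
```

Updating `error_schema` in the library immediately changes what every `$ref` resolves to, which is the consistency-and-reuse property the feature is built around.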
Getting Started with Component Libraries
Creating a Component Library
To create a Component Library, you’ll need to identify the models that should be reused across different API projects. Once you have a list of models, you can create a new Component Library project in Stoplight.
In the Component Library project, you can define and document the models that should be reused. It’s essential to ensure that each model is documented to make it easy for other designers to understand how to use them.
After you have defined and documented your models, you can publish your Component Library. Once your Component Library is published, it is accessible to other developers who can consume the models in their API projects.
Socializing the Component Library
To promote the reuse of the Component Library models, you’ll need to socialize it within your organization. You can do this by sharing the generated documentation with other developers and teams who may be interested in using the models.
Consuming the Component Library models
After you have socialized the Component Library, designers can consume the models in their API projects. When designing an API project, they can add a reference to the Component Library model, and it will be automatically added to the project.
When a model is added to an API project, it is locked for editing. If changes are required, they must be made in the Component Library project, and then the changes can be pushed to all the API projects that consume the model. This ensures consistency across API projects and eliminates the need for duplicating the same model in multiple projects.
Feedback, Review, and Iteration
As API programs evolve, it’s essential to ensure that the Component Library models are updated accordingly. This includes iterating on existing models and adding new models as needs arise. With the Component Library feature, updates made to a model in the Component Library project automatically result in a notification in all the API projects that consume the model. It’s important to review and accept the updates to ensure that they do not negatively impact the API projects.
The Component Library feature in Stoplight can be used in conjunction with Comments and Style Guides as a comprehensive feedback and review system that enables API governance at scale.
In the end, our new Component Libraries beta feature helps you create models, share them, and consume them easily. You can give it a try for yourself today by logging into your account if you're on a Pro or Enterprise workspace. If you have feedback for our beta, let us know here.
https://4archive.org/board/g/741

What is the best way to convert a smartphone into a laptop?
Pic related. It's https://www.sentio.com/
Install Taskbar from F-Droid
Plug the phone into a TV
Plug a keyboard and mouse into the phone
Install WPS Office
Alternatively you can:
Install a FOSS VNC server on your phone
Install the TightVNC client on your laptop
Like, regularly? What does it bloat? Or can it even be prevented?
Well, first of all, Windows was originally built on an architecture that used C++, with a kernel in C. Mac uses the more stable Unix / Objective-C stack: a quirky but controlled environment.
Windows bloats because of registry issues, bad data mapping onto hard disks (fragmented drives), BIOS malfunctions and corruptions in the early years, and not to mention the ever-inescapable bloatware that is pre-installed to keep your computer "up to date"... To put it simply, it can be slowed, but the aging system's growing inability to allocate memory properly and keep away the "bugs" will prove to be your inevitable downfall.
Poorly coded software that fucks up everything, installs endless outdated libs and frameworks, shits all over your registry (which then gets too big for its own good), and does the same thing on your filesystem, which it fills up with temporary data that's never cleaned... Windows is also to blame: it piles up tons of logs and data you'll never read nor need and that should be disabled by default, and it keeps backups of past updates for no reason, which a fresh install rolls into a single one...
You could avoid it by avoiding bad software, but then you would only be using Paint and Notepad. Or you can prevent it with tight administration, but while that's worth it for a large company, it's a lot of work for your sole home computer, and formatting once in a while is much quicker.
Is it legal to stream and upload local police radio via SDR?
What are some good Pale Moon extensions/themes/addons/configurations? Whatever you've got.
How can hacking / creative coding be used for the forces of good?
Basically, companies pay you to find flaws and exploits in their software and systems so that they can fix them before some malicious hacker finds them later on down the road
Or if you want to be a white-knight vigilante you could just hack people that no one likes and mess with them, or help catch criminals and such
Some crypto miner in my town is selling his old mining rig, including GTX 1070 cards. They're about a year old and he's been mining with them.
He claims in the ad that they work fine (he's selling them because they're not as profitable anymore).
Do silicon chips degrade over time? Is it worth buying old GPUs that have been run hard (I'm guessing 24/7) for a whole year? How quickly does the silicon degrade and begin to fail?
They have no moving parts which degrade; only heat can damage them. Still, the chips can take some damage and continue functioning by turning off the damaged parts, which leads to a loss of performance. To keep a chip alive as long as possible, either cool it very well or turn down its clock speed and voltage. There's no real way to tell when your chip will fail during normal operation: it could be the day you bought it, or in 30 years.
Why are people so stupid?
>Why are people so stupid?
Why are you fucking blind?
So delta.chat uses Autocrypt to do the whole crypto magic (PGP) seamlessly. It uses email to let you communicate as if you were on Telegram. So it has the benefit of using the most old-school tech while still being able to communicate efficiently.
There are some downsides when you are talking to non-delta.chat users: their inbox will be spammed, and when they respond they will not see the particular thing you are responding to.
I pretty much gave up on email a while ago; it is things like this that make me think it's not completely over just yet. It is decentralized, and the problem of different clients will be solved pretty quickly in the near future. I don't think it will be hard to package this program for different operating systems. All they need to do is make it a Chrome app, and pretty much you are now cross-platform. Done.
Here are some links for people to read regarding Autocrypt that I found on the interwebs, addressing common questions about what I am talking about.
I think it shows promise, especially once the clients work everywhere and the multi-device ability on the same account improves.
>Why should I use this instead of or in addition to Telegram, again?
Not to shitpost on what you said, but I literally just gave a ton of links up top explaining what the difference is. Primarily: to use Telegram, YOU need to be on Telegram. To use email, you don't need anything other than email, which is universal. So it adds chat-client functionality to email; that is the entire point, at least.
Also, Telegram group chats are NOT end-to-end encrypted, and Telegram uses encryption that is novel and not well tested; do I need to keep going? I use Telegram only because it is convenient. But once this email thingy takes off, it will be so much better than anything out there. Nothing is faster or more efficient than email: better battery life, better bandwidth savings, and on top of that the convenience of not having to convince someone to join yet another network. All they would have to have is some compatible client that gives them the chat-like experience. Like I said, the current downside is that there isn't such a client for all platforms, but that should be resolved quite quickly.
ITT: ideas for apps/services that you can't program, or can't be bothered to program, yourself
Mine would be an app that lets you send text messages from a computer over Bluetooth, with no text going through some botnet server like Pushbullet or Airdroid
a program to make my wife real
I made some CSS rules for 4chan.
>Inb4 minimalism is gay
PS: Stylish is a botnet; use Stylus
2bh you guys may be confused as to why I simultaneously shill various operating systems and why I at other times mock others for using those same operating systems. The truth is, I don't even use computers and I thought we were all just tech illiterate people roleplaying as computer users.
>why I simultaneously shill various operating systems
Because you're bought and paid for, and your masters tell you to.
>why I at other times mock others for using those same operating systems.
Because of your guilty conscience.
This is the absolute current state of Intel
>thousands of lines of code
>doesn't do anything
OOP in a nutshell
Can someone tell me what this connector is called? There is a cigarette-lighter adapter that hooks up to it, but I keep losing it.
http://www.mrsevansenglishclass.com/12th-grade/wednesday9246150

I can interpret a soldier's experience in war through allusion and imagery.
- Get out notes for The Soldier's Heart. Finish Film. Discuss: what does this documentary communicate about the soldier's experience? Choose a quote about what the men "carry" from chapter 1. Make a connection to what you saw in the film.
- Lazarus- read story with partner
- Tunnels- we will watch this together
- Ted Lavender's Death: Passage Analysis/Annotations. What does this passage communicate about death?
- Begin HW
- HW: Read through On The Rainy River
http://www.jenitennison.com/blog/taxonomy/term/8/9

I’m still thinking about doing automatic markup with XML pipelines, and the kind of components that you might need in such a pipeline. These are the useful ones (list inspired by the components offered by GATE):
Michael Sperberg McQueen (CMSMcQ) has written a couple of interesting posts about datatypes in W3C’s XML Schema (XSDL). (The second is a response to a comment from John Cowan, and attempts to justify some of the seemingly arbitrary decisions made in the set of datatypes present in XSDL 1.0.) The posts are a discussion of one of the issues against XSDL 1.1 raised by Michael Kay:
Michael proposes: just specify that implementations may provide additional implementation-defined primitive types. In the nature of things, an implementation can do this however it wants. Some implementors will code up email dates and CSS lengths the same way they code the other primitives. Fine. Some implementors will expose the API that their existing primitive types use, so they choose, at the appropriate moment, to link in a set of extension types, or not. Some will allow users to provide implementations of extension types, using that API, and link them at run time. Some may provide extension syntax to allow users to describe new types in some usable way (DTLL, anyone?) without having to write code in Java or C or [name of language here].
Yes, I’m determined to write up every talk I attended at XTech 2007, so that I have a record of it if nothing else. On Wednesday afternoon, I attended sessions on microformats, internationalisation and NVDL (as well as giving my own talk, of course).
Since there’s next to no ‘net connection at XTech 2007 (obviously the Web is not so ubiquitous as all that), I have nothing to do in the sessions but listen! Here are some thoughts about the sessions that I attended on the morning of Wednesday 16th. I haven’t included the keynotes not because they weren’t interesting but because I can’t think of anything to say about them at the moment.
Argh. I’ve been contacted by the guys at WikiCreole who want me to change the name of Creole. What should I do? Not only is “Creole” a great name for a schema language that deals with concurrent markup, but it’s a great acronym too (Composable regular expressions for overlapping languages etc.)
I did Google when I first came up with the name in August 2006, but didn't discover WikiCreole (unsurprisingly, since it was itself only coined in July 2006). But now far more people know, care about and use WikiCreole than Creole grammars. So, any suggestions for alternative names?
https://forum.up-community.org/discussion/4093/some-questions-about-ubuntu-installing-and-kernel-changing

Some questions about installing Ubuntu and changing the kernel
I recently bought a UP Xtreme, but unfortunately I'm having some problems using it smoothly. I have the following two problems; it would be greatly appreciated if they could be solved.

The first trouble I encountered is with installing Ubuntu. After installing the system, it can't boot successfully: it enters the UEFI setup or GRUB instead of the system. I installed it many times and only succeeded a few times (for various reasons I uninstalled it afterwards).

The second is about changing the kernel. The after-sales support told me that I need to change the kernel of the Ubuntu system so that the maker board can perform at its best. However, even though I did exactly as the instructions said (…), it failed to boot after the kernel change.

It's my first time changing the kernel of a system and I'm really confused. I hope someone can help me figure it out.
https://pisquare.osisoft.com/thread/43411-piwebapi-performance-issues

We're running into a periodic or sporadic issue with our PI Web API servers. There are two PI Web API 2019 servers (v. 22.214.171.12445) sitting behind a BigIP LTM for load balancing. Both are running on Windows Server 2016 Standard with 8 GB of memory. Nothing else is running on these servers. I think I've ruled out the load balancer, as we experience the issue whether we make calls directly to an individual server or via the load balancer.

The calls we are making are pretty simple: just a call to the streams controller and the GetEnd action. Each call returns about 600 bytes. We're looping through an array of about 15 web IDs and making the call using jQuery's $.ajax method.

Here is a sample call:

It seems that if the PI Web API is idle for long enough, the first call will take between 15 and 40 seconds. Subsequent calls have sub-second response times. I'm guessing the subsequent calls are faster due to caching, but why is that initial set of calls taking so long? And is there any way to speed it up?
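One way to pin down a cold-start pattern like this, before blaming the server, is to time a burst of identical calls and compare the first duration against the rest. A minimal harness is sketched below; the HTTP request is stubbed out with a fake (the real endpoint above is internal), so only the timing pattern is illustrated:

```python
import time

def time_calls(fn, n=3):
    """Time n successive calls of fn; return the durations in seconds."""
    durations = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        durations.append(time.perf_counter() - t0)
    return durations

# Stand-in for the real GET .../streams/{webId}/end request:
# the first invocation is slow (cold caches), later ones are fast.
state = {"cold": True}

def fake_getend():
    time.sleep(0.2 if state.pop("cold", False) else 0.01)

durations = time_calls(fake_getend)
print(durations[0] > durations[1])  # → True: first call dominates
```

With the fake replaced by a real request (e.g. via `urllib.request`), a first-call-only spike would confirm cache warm-up rather than a load-balancer problem.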
https://practicaldev-herokuapp-com.global.ssl.fastly.net/gortron/sso-oauth-2-0-vs-openid-connect-vs-saml-3kj

I was asked recently in an interview to explain my understanding of OAuth 2.0, OpenID Connect, and SAML. I had heard of the terms before, and knew they had to do with single sign-on and authenticating users. I had implemented auth in Rails using sessions, in Node using JWT, and again with Auth0, but couldn't speak to the differences between these SSO frameworks. In this post, I'll briefly summarize these frameworks, their differences, and what they're commonly used for.
OAuth is a framework for delegated authorization. It is designed to be generic and flexible. In OAuth, an identity provider (IDP) issues tokens to other services with the user's approval. This lets an application access resources from a server on behalf of the user, without having to share the user's credentials. It allows for sign-in, as well as sharing contacts across applications, or gaining access to a third-party service.
OpenID Connect (and SAML) are frameworks for federated authentication. OpenID is built on top of OAuth. It uses JWT to issue id_tokens, which include information about the subject (who is authenticating), the issuer (who issued the token), and the necessary authentication information about the user. Passport, a popular Node module for auth, can be configured for OpenID.
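To make the id_token structure concrete, here is a toy, unsigned token assembled and decoded with only the standard library. Real id_tokens are signed and must be verified before trusting any claim, so treat this purely as a look at the claim layout (issuer `iss`, subject `sub`):

```python
import base64
import json

def b64url_encode(obj) -> str:
    # JWT segments are base64url-encoded JSON with the padding stripped.
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(seg: str) -> bytes:
    # Restore the stripped padding before decoding.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

header = {"alg": "none", "typ": "JWT"}
claims = {"iss": "https://idp.example.com", "sub": "user-123", "aud": "my-app"}

# header.payload.signature -- the signature segment is empty for alg "none".
token = ".".join([b64url_encode(header), b64url_encode(claims), ""])

# Receiving side: split the token and decode the payload to read the claims.
payload = json.loads(b64url_decode(token.split(".")[1]))
print(payload["iss"], payload["sub"])  # https://idp.example.com user-123
```

The issuer URL and claim values are invented for illustration; a production consumer would use a JWT library that checks the signature, expiry, and audience.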
OpenID is specifically designed for user authentication. It's used widely in consumer and mobile applications. It allows for third-party sign in features, like "Sign in with Google".
SAML (security assertion markup language) is an authentication standard independent from OAuth/OpenID. It uses a flavor of XML, shared between identity providers and service providers, to authenticate and authorize users. The identity provider will write an assertion, which is an XML document that includes information on the subject, the issuer, and the authentication information.
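A drastically simplified, namespace-free sketch of that assertion idea follows; real SAML assertions use the `urn:oasis:names:tc:SAML:2.0:assertion` namespace, XML signatures, and validity timestamps, all omitted here:

```python
import xml.etree.ElementTree as ET

# Identity-provider side: build a bare-bones assertion naming the
# issuer and the authenticated subject (values are invented).
assertion = ET.Element("Assertion", ID="_abc123")
ET.SubElement(assertion, "Issuer").text = "https://idp.example.com"
subject = ET.SubElement(assertion, "Subject")
ET.SubElement(subject, "NameID").text = "alice@example.com"
ET.SubElement(assertion, "AuthnStatement", SessionIndex="_s1")

xml_doc = ET.tostring(assertion, encoding="unicode")

# Service-provider side: parse the document and pull out who issued
# it and who was authenticated.
parsed = ET.fromstring(xml_doc)
issuer = parsed.findtext("Issuer")
name_id = parsed.findtext("Subject/NameID")
print(issuer, name_id)  # https://idp.example.com alice@example.com
```

The point of the sketch is the shape of the exchange: the IdP writes an XML assertion about a subject, and the SP parses and trusts it (after signature checks, in the real protocol).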
SAML is commonly used in enterprises that need to manage authorization and authentication for a suite of different software tools. In SAML, the service provider is almost always a web app (a hangover from its early days as an SSO solution for Windows Active Directory), which makes it not ideal for mobile.
Like any other technology, each SSO strategy has its tradeoffs, and teams evaluate their specific use case before selecting a strategy. I hope this brief introduction was helpful! I have some examples of different authentication strategies on display in my GitHub profile.
http://verblegherulous.zenandtaoacousticcafe.com/2022/11/overheard-at-table-3-lucky-and-otis.html

Tuesday, November 15, 2022
Overheard at Table 3: Lucky and Otis - Pirates or Ninjas?
Otis Redwing: Pirates or Ninjas?
Lucky Moran: For what?
Otis: Just saw it online, someone asking which one is cooler: Pirates or Ninjas.
Lucky: Pirates of course!
Otis: Why "of course"?
Lucky: 'Cuz no one ever sang sea shanties about ninjas!
Otis: Sure they did ... you just couldn't hear 'em!
Lucky: Ohhhhh good one!
http://manpages.ubuntu.com/manpages/dapper/man9/ata_scsi_rw_xlat.9.html

Provided by: linux-doc-2.6.15_2.6.15-23.39_all

NAME
       ata_scsi_rw_xlat - Translate SCSI r/w command into an ATA one

SYNOPSIS
       unsigned int ata_scsi_rw_xlat (struct ata_queued_cmd * qc,
                                      const u8 * scsicmd);

ARGUMENTS
       qc        Storage for translated ATA taskfile
       scsicmd   SCSI command to translate

DESCRIPTION
       Converts any of six SCSI read/write commands into the ATA
       counterpart, including starting sector (LBA), sector count, and
       taking into account the device’s LBA48 support.

       Commands READ_6, READ_10, READ_16, WRITE_6, WRITE_10, and
       WRITE_16 are currently supported.

RETURNS
       Zero on success, non-zero on error.
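The manpage describes a kernel C function; purely as an illustrative sketch of one piece of the translation it performs, here is the starting-LBA and sector-count extraction for a READ_10 command in Python (the kernel code itself does this in C on the CDB bytes):

```python
# SCSI READ_10 CDB layout: byte 0 opcode (0x28), byte 1 flags,
# bytes 2-5 starting LBA (big-endian), byte 6 group number,
# bytes 7-8 transfer length in sectors (big-endian), byte 9 control.

def parse_read_10(cdb: bytes):
    """Pull (starting LBA, sector count) out of a READ_10 CDB."""
    assert cdb[0] == 0x28, "not a READ_10 opcode"
    lba = int.from_bytes(cdb[2:6], "big")
    n_sectors = int.from_bytes(cdb[7:9], "big")
    return lba, n_sectors

# READ_10 asking for 8 sectors starting at LBA 0x12345678:
cdb = bytes([0x28, 0x00, 0x12, 0x34, 0x56, 0x78, 0x00, 0x00, 0x08, 0x00])
print(parse_read_10(cdb))  # (305419896, 8)
```

The real function additionally handles the 6- and 16-byte variants and fills in an ATA taskfile, choosing LBA48 commands when the device supports them.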
https://michaelrodbell.com/2014/02/03/global-software-development-training-the-new-team/

Getting back to the previous thread about building global teams.
As you assemble the new team, one of the first, and most important activities will be to get the new team ready.
Consider the following:
- Hire people with the right skills and aptitude (surprise!). Bring in people with a good combination of technical skills, along with communication and interpersonal traits that will help them learn quickly. Smart, engaged team-players who fit your operating model are the most likely to succeed, and build sustainable teams.
- Focus your training efforts. Take inventory of the work that you’d like the team to start with. Ideally, its something meaningful, but somewhat less time critical than the work your more established teams are doing.
- Build a strong foundation. Think a bit about what you’d like the team to be doing further along, once the initial tasking has completed.
- Plan activities. Based on the upcoming assignments, identify the major gaps that need to be filled to get the team to the point where they are able to contribute on the initial tasks (presumably learning on their own, and knowing where to get answers).
- Identify the sources of training. Some will be written, some will require interactions with longer term employees.
- Seed the team. If you’ve taken the hiring path I suggested in my earlier post, you should have been able to get more seasoned, technical leads on board first. These people will become a valuable bridge. Help them become effective as quickly as possible.
None of this should be a surprise. As you plan and move forward in training the team, concentrate on building productive and strong work and social relationships among team members in the various locations. As the team gets to know one another, they will become more comfortable asking questions and sharing information. This will pay huge dividends. Your goal will be to establish a supportive, and collaborative relationship across all groups. When you’re able to do this, progress will come at a fast pace.
I view this process as coming in three general stages, preparation, familiarization, and ramping productive work. Lets take a look at each of these phases.
While you’re busy recruiting, think about what will help the new team members become productive. Take inventory of documentation, build catalogs/directories of key pieces of information. This can include pointers to tooling, source code, test resources, and a directory of who’s who. Consider having some of your key leaders or senior members of the technical staff prepare materials to present to the team. In some cases, recorded sessions and tutorials may be helpful. Also, don’t underestimate the value of budgeting for travel. You may want to solicit support for having team members travel to spend time with one another.
Once the new employees start to arrive, you’ll want to move quickly to familiarize them with your products, services, and the work that you’ve outlined for them. Get people together. Have your business leads provide overviews of how your products and services are used by customers. Have your tech leads walk through the system architecture, and help the team understand the working environment and tools. Have your project leads or directors provide an introduction to how projects are managed. This can include overall process guidance, pointers to tools, and an open Q&A session. You’ll want the team to be ready to interact and collaborate in a transparent & consistent manner. You’re also likely to be training the trainers. There may be other team members who have yet to arrive. The more that you can train their future office mates, the easier things will be in the longer term.
As quickly as you can, get the new team members working on active projects. In many cases, it can be helpful to engage the new team members in sorting out defects and resolving issues in areas of the system. This type of work is a good catalyst for learning the environment, tooling, and application. Assuming that the issues are reasonably easy to reproduce and isolate, defect remediation work helps provide context for those starting to learn the new system. Outside of defect fixing, it may also be useful to concentrate on small enhancements to the product. In general, find work that can be mastered in a shorter period of time, and that gives the new team members good opportunities to contribute and learn.
In the past, I’ve also found that it can be helpful to build out an inventory of key skills and develop a dashboard that can help visualize the coverage of knowledge that you’d like the team to obtain. If you decide to monitor their progress in skills acquisition, also be prepared to adjust your goals, as they are likely to change as the product and competitive environment will evolve in tandem with your team.
These phases of your project can be both interesting and challenging. They provide a great opportunity for everyone, including the legacy employees to learn, and expand their capabilities. Pay attention, and ensure that people are sharing information, and good things can happen.
https://www.minecraftforum.net/forums/minecraft-java-edition/creative-mode/367965-how-to-turn-an-already-made-area-into-an-island

I'm making an adventure map and I really need to know how I can turn a large area into an island. Pretty much, I want to explore around and uncover a good amount of area and biomes, then somehow turn what I have uncovered into an island. I'm trying to do this with as little "fixing" as possible (flat lines where chunks were deleted, and such).
Use a flatland preset for clay, go into it, place down 1 bedrock, then log out. After that change to version 1.5.2, and all the clay would be gone, just the bedrock sitting there. Use MCedit to make a massive island. | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00499.warc.gz | CC-MAIN-2020-34 | 562 | 2 |
https://www.freelancer.jp/job-search/aspnet-facebook-application-add-section-button/ | code | I want someone who can work for us to give analysis of writing section tasks in TOEFL & IELTS [ログインしてURLを表示] is required with these exams .Kindly send some sample analysis done with resume.
Hello all, I want to add three sections which I am attaching as a mock-up in the WooCommerce [ログインしてURLを表示]. The calculation must be visible, and from the admin panel as well. Here is the link [ログインしてURLを表示] go to COST & BOOK; there I want to add them. Budget is max 50 USD. DO NOT BID MORE THAN THIS
Seeking VBA Script to Print Defined excel range and save to PDF format from Print button. Need naming convention of file to reference name tags when saving. Naming convention should be: "FOUO - Artifact Table (MEMO NAME, PLATE NUMBER, REV).PDF Looking to save in C:/TEMP folder or active directory and then potentially open up outlook email with it
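For illustration only, the naming convention requested above can be expressed as a small formatting helper. This is a hypothetical Python sketch (the posting itself asks for VBA); the function name and sample values are invented.

```python
# Hypothetical helper that builds the requested file name:
# "FOUO - Artifact Table (MEMO NAME, PLATE NUMBER, REV).PDF"
def artifact_pdf_name(memo_name, plate_number, rev):
    return f"FOUO - Artifact Table ({memo_name}, {plate_number}, {rev}).PDF"

name = artifact_pdf_name("MEMO-042", "PLATE-7", "B")
```

The same string template would carry over directly to the VBA `SaveAs` call the posting describes.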
I need to hire individual who can provide me LEAD for New demat & trading accounts for equity & commodity section for LKP SECURITIES ( One of the finest brokerage house in india ...operated since 1948...Benefits: no amc , no hidden charges, good limits ) .Brokerage will be negotiable for [ログインしてURLを表示] opening of new accounts the individual will get handsome
Hello, for testing you, send me 5 websites as described, with submit code button, from Italy in italian language. And send links and screenshots showing the submit coupon button inside the website.
We have some edits to an existing Wordpress website. We require the header to be recreated with the addition of a button and link. We also require a button to be added to a specific page layout. All work must be accurate and tested for desktop, tablet and mobile view. This will be simple for an advanced Wordpress developer. Please only apply if you
...payments Print and PDF option User email verification on registration User menu in the header Invoices system Email notification Pay per post system Comments system Billing history Add to favorite option Custom Banner +16 Pre-Made Layouts 100% RTL Supported Arabic Pre-made layout Included MailChimp Integrated PayPal Integrated Interac payment Integrated Credit
Need to add ID into the column view in the booking sections of Wordpress. It was there previously; however, since a plugin update it has disappeared
I need a binary options trading site like olymptrade, and users should also be able to invest on the platform if they cannot trade, like an in-built investment platform
Looking for someone to Cut a helmet STL file into smaller sections to allow for easier 3D printing.
Hello, I need a section for 2 banners (just images) on the main page. I want it adaptable with every resolution Just need a php file to integrate it, that loads 2 different images (size: one the size of 6 columns and one of 4) Budget: $10
We are two entrepreneurs wanting to create an online app based on an existing stand alone application. The application targets the property investment market, and gives the user quantitative insight into potential buys. This task will require you to create source code for a bond calculator. The end result will be a tool that accepts a few inputs from
After I created an SSL certificate, the last page on the checkout section is blank. Can somebody help?
Please share the total cost of this project in the bidding section. I will not pay any extra cost. Samples like [ログインしてURLを表示] [ログインしてURLを表示]
...finish it ASAP, like by one day. ** A section is an area in the elementor frontend editor where we can drag and drop any widgets. Currently in elementor, a "section" cannot be used as a controller. But I want something that can be used like other normal controls within any widget. The main purpose is to use this section control in a repeater within a widget
On an actual website using Wordpress, I need to: 1 - Create an invoice in PDF with a specific design. 2 - Create a button "Invoice" with a specific rule of display.
raspberry pi: push button to play video, then use rfid to open magnetic door
This is going to appear in our web-pages as a button to link to a self-help legal forms preparation site and will also be featured in our brochures and other outreach materials. We have named the logo "Self-Prep and File". One thought was of a button with those words around the perimeter with a green light (or green stop light) glowing in the center
Hello, I would like to add a page to my website where current events are posted. Here is the site we're adding the page to [ログインしてURLを表示] I want the page to kinda look like the below template. Just a clean and easy way to add new posts for different articles.
I am looking for a Shopify expert to adjust my Shopify store code for the "Ad banner section" from 6 photos to 10 photos with automatically "full frame" Also must be mobile layout friendly. Please bid if you can start right away. [ログインしてURLを表示] Thank you.
Woocommerce shop add to cart button change
There is a simple BBB style customization needed. It can be done in two "stages": The first stage (ASAP): change the BBB logo and the copyright note to the custom ones. The second stage (can be done later, but not necessarily): remove demo message in the chat. All the details can be found in file attached to the project.
I need an "WooCommerce"/WP Courseware update plugin in my WordPress website [ログインしてURLを表示] at my "Products" section, under selection of "Default sorting"; to give the following options: "Default sorting" "Sort by Certificates" "Sort by Courses" "Sort by Diploma" "Sort by price: low to high" "Sort by price: ...
I need a landing page for a small startup firm, It is a single page, multiple section. Cheers Dee
Extra task for Elancesoft I've done 2 cards: Remove unrequired details on checkout Add 'view details' buttons to shop items on shop page so there is 2 buttons for each item - one add to cart, one view details. other button can be the twofish blue. The price for this is $50
I need a sample blank page in php with a button when clicked will take to paypal sandbox and show the success result. That's it. It will be only a sample blank page.
Country cannot be loaded with address->id_country We are getting this error on Prestashop. Please PM if you really know the solution. NO TIME WASTERS please. Please reply TWO + THREE in your proposal so I know you read full description
add animated button and add copy to clipboard functionality to my wordpress site
Hi, I need assets for an infinity jumping game developed with Unity: low poly objects and some minimalistic UI design. More information will be shared with selected applicants. Please provide examples of previous games you developed assets for, either as a video or as a link to the Google Play Store or to the App Store.
I want to make a small new section on the homepage for users to be able to subscribe. Right now the subscribe form is in the footer and does not look good; I want to remove that and make a section above the footer (see attached). Also, I need it to show well on mobile | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658662.31/warc/CC-MAIN-20190117000104-20190117022104-00513.warc.gz | CC-MAIN-2019-04 | 7,071 | 30
https://optics.ansys.com/hc/en-us/articles/360042304234-Phase-shifted-Bragg-grating | code | In this example, we will use MODE's EME solver to study the effect of adding a phase shift to the Bragg grating to create a resonance peak within the stopband. This design can be used as a filter in an integrated optics circuit, as well as a sensor for biological sensing applications.
In this Bragg grating design, a phase shift is introduced at the center of the device to create a Fabry-Perot cavity with mirrors formed by the gratings on each side. This phase shift will lead to a sharp resonance peak within the stop band of the transmission spectrum.
The setup for the phase-shifted Bragg grating in shift_Bragg_eme.lms is similar to that of the waveguide Bragg grating example, with a few modifications required to accommodate the cavity region at the center of the grating.
Under the EME setup tab, we define 5 cell groups for the EME solver, 2 for the input and output waveguides, 1 for the center phase shift region and 2 for the gratings on each side. Note that 2 cells are used for cell groups 2 and 4, 1 for each waveguide width. For the initial simulation, we will use 10 modes for each cell group in the EME calculation. Symmetry is used here to reduce the number of modes required for this calculation.
To set the periodicity of the grating, we will define 2 periodic groups under the "periodic group definition" table, 1 for the gratings on each side of the cavity. This means that cell groups 2 and 4 will be propagated 100 times, and the final length of the device will be 66.32um.
Since EME is a frequency-domain method, we will need to run 1 simulation for each wavelength of interest. The script shift_period_sweep.lsf will calculate the results at each wavelength and plot the transmission spectrum of the phase-shifted Bragg grating.
The transmission spectrum of the phase-shifted Bragg grating with 100 pairs of gratings on each side of the cavity is shown below. One can see the sharp resonance peak in the middle of the stop band, which is consistent with the experimental results in [1].
The script shift_period_sweep.lsf can also calculate the spectrum for a different number of grating periods. Since scanning the number of periods does not require re-calculating the modes, one can obtain the transmission spectrum for an arbitrary number of grating periods with very little additional computation time. The figure below shows the transmission for the same Bragg grating with different numbers of grating periods. One can see that the resonance peak becomes sharper as the number of periods increases, but eventually disappears when the number of periods is very large.
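The same qualitative physics can be reproduced outside the EME solver with a simple transfer-matrix sketch. The snippet below is not Lumerical script and does not model the SOI waveguide geometry above; it is an illustrative pure-Python calculation for a quarter-wave stack with a half-wave phase-shift layer, where the wavelength, indices, and pair count are assumed values chosen for the demonstration. It shows near-unity transmission at the resonance inside an otherwise opaque stop band.

```python
import cmath
import math

def layer_matrix(n, d, lam):
    # Characteristic matrix of one lossless dielectric layer, normal incidence.
    delta = 2 * math.pi * n * d / lam
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def mat_mul(a, b):
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def transmission(layers, lam, n_in=1.0, n_out=1.0):
    m = [[1.0, 0.0], [0.0, 1.0]]
    for n, d in layers:
        m = mat_mul(m, layer_matrix(n, d, lam))
    b = m[0][0] + m[0][1] * n_out
    c = m[1][0] + m[1][1] * n_out
    t = 2 * n_in / (n_in * b + c)
    return (n_out / n_in) * abs(t) ** 2

lam0 = 1.55e-6                    # assumed design wavelength
n_hi, n_lo, pairs = 2.1, 1.45, 8  # assumed indices and pairs per mirror

def quarter_wave(n):
    return lam0 / (4 * n)

mirror = [(n_hi, quarter_wave(n_hi)), (n_lo, quarter_wave(n_lo))] * pairs
cavity = [(n_hi, lam0 / (2 * n_hi))]  # half-wave phase-shift layer
stack = mirror + cavity + mirror[::-1]

T_res = transmission(stack, lam0)          # at the resonance wavelength
T_band = transmission(stack, 1.05 * lam0)  # in the stop band, off resonance
```

Increasing `pairs` narrows the resonance peak, mirroring the period sweep behavior described above.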
1. P. Prabhathan, et al., “Compact SOI nanowire refractive index sensor using phase shifted Bragg grating", Optics Express, Vol. 17, No. 17, 2009 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00187.warc.gz | CC-MAIN-2023-50 | 2,744 | 9 |
https://community.ptc.com/t5/Arbortext/Elements-out-of-Context/td-p/83449 | code | I'm working on a file that uses the MIL-STD-3001 XML standard. When I open the file in Arbortext, it turns context rules off, telling me that an element is out of context. Looking at the DTDs, that element is a valid child element of the parent that contains it. Deactivating and reactivating the style sheet sometimes allows me to turn context rules back on, but the problem comes back when I open the file. Has anyone else had this experience? Any troubleshooting tips?
I have not seen that particular behavior. I've worked with MIL-STD-40051 and 38784.
Most of the MIL-STDs are SGML-based. Have you tried recompiling your DTD? When working in different versions of Arbortext, recompiling the DTD can correct context issues. Arbortext will sometimes ask you to recompile, especially if you have changed versions of the software. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510387.77/warc/CC-MAIN-20230928095004-20230928125004-00061.warc.gz | CC-MAIN-2023-40 | 830 | 3
https://gamrconnect.vgchartz.com/post.php?id=9220624 | code | I think it would be great if Microsoft kept Bethesda exclusive to Xbox. It would boost console sales for Microsoft and it would force Sony to bring their A game with their own exclusives. It would be like ps3 vs 360 all over again where we were missing out if we didn't own both consoles. Only thing is that Microsoft doesn't seem that interested in winning a console war. They are looking at a much bigger picture. A future where the whole connected world will be playing Gamepass in the cloud. | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00376.warc.gz | CC-MAIN-2021-10 | 495 | 1 |
https://xoops.org/modules/publisher/item.php?itemid=5982&com_id=57864&com_rootid=57864 | code | The new version of wgTimelines is ready for testing.
wgTimelines module for XOOPS CMS enables you to create beautiful timelines for your XOOPS website.
New in version 1.12
- change to namespaces
- added feedback form
- added image editor (like in wgGallery)
- added test data (especially for alain01)
With the new image editor you can crop images, create image grids, ...
Download under https://github.com/ggoffy/wgtimelines
Please post bugs and suggestions for improvement under https://github.com/ggoffy/wgtimelines/issues | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00726.warc.gz | CC-MAIN-2020-40 | 516 | 11 |
https://blogs.oracle.com/database/ace-directors-discuss-key-features-in-oracle-database-19c-v2 | code | Oracle ACE Directors Rich Niemiec, Julian Dontcheff and Jim Czuprynski share their thoughts on the Automatic Indexing and SQL Quarantine features in Oracle Database 19c.
Scaling in Oracle Database
Oracle Database 19c (release 19.4 and later) now includes the ability to dynamically scale...
Oracle Cloud offers users who want to create web applications but are not professional software developers low-code development | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00575.warc.gz | CC-MAIN-2020-24 | 418 | 6
http://mamutopia.blogspot.com/2010/01/new-project-for-new-year.html | code | Project number one for 2010:
crocheting a granny square blanket.
This has actually been on my list for ages (haha, almost embarrassing!), but now I really want to get myself started. After some research I decided on the colours: the main colour will be a bit off-white. I tore out the picture above from a magazine and keep it as an example; I absolutely love it! It's by the way made by Wood & Wool Stool.
Now I'm putting several colours together to see how they look, and then I should just get started I guess :) | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00446-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 514 | 4
https://www.bleepingcomputer.com/forums/t/416724/trying-to-get-back-my-fast-acer-laptop/ | code | My primary question is how to speed up Internet Explorer 9. When I first installed it, it was extremely fast. I was very impressed, because that is what it actually advertises as its best feature. I do not use toolbars in IE and use minimal add-ons, so I cannot find out why it has slowed down so much in the last month.
I would also like to know which processes/services are not needed in Windows 7. When my computer gets sluggish I go into the task manager and see so many things running that I am not familiar with. I have used the hijackthis tool a couple of times to stop some things, but they seem to come back. I am attaching my latest hijackthis log to this post. I have a bunch of entries with 'missing' files and I have no idea why. I have not uninstalled anything since I ran the tool the last time.
I'd appreciate any advice and insight into this Windows 7 animal.......
Mod Edit: Removed HJT log, since that is a malware tool and is not to be used for system troubleshooting...and there is no indication that your topic should be moved to one of our malware forums ~ Hamluis.
Edited by hamluis, 30 August 2011 - 11:15 AM. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00517.warc.gz | CC-MAIN-2018-43 | 1,130 | 5 |
https://stackshare.io/stackups/haproxy-vs-kong | code | HAProxy vs Kong: What are the differences?
HAProxy: The Reliable, High Performance TCP/HTTP Load Balancer. HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications; Kong: Open Source Microservice & API Management Layer. Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong controls layer 4 and 7 traffic and is extended through Plugins, which provide extra functionality and services beyond the core platform.
HAProxy and Kong are primarily classified as "Load Balancer / Reverse Proxy" and "Microservices" tools respectively.
"Load balancer" is the top reason why over 118 developers like HAProxy, while over 28 developers mention "Easy to maintain" as the leading cause for choosing Kong.
Kong is an open source tool with 22.4K GitHub stars and 2.75K GitHub forks. Here's a link to Kong's open source repository on GitHub.
Instagram, Dropbox, and Medium are some of the popular companies that use HAProxy, whereas Kong is used by Checkr, Policygenius, and Decision6. HAProxy has a broader approval, being mentioned in 457 company stacks & 211 developers stacks; compared to Kong, which is listed in 50 company stacks and 14 developer stacks.
I use Kong because it reliably proxies traffic quickly with an assortment of pluggable features. The engineers behind the product are of the highest quality. The Company has cultivated the largest active open source community of any API gateway. They generally squash bugs in hours or days not weeks/months. Company engineers help community members through social avenues as well as supporting large enterprise. They heavily value their product and individuals as opposed to just solely growing enterprise license fees.
We needed a lightweight and completely customizable #microservices #gateway to be able to generate #JWT and introspect #OAuth2 tokens as well. The #gateway was going to front all #APIs for our single page web app as well as externalized #APIs for our partners.
Contenders: We looked at Tyk Cloud and Kong. Kong's plugins are all Lua based and its core is NGINX and OpenResty. Although it's open source, it's not the greatest platform to be able to customize. On top of that, enterprise features are paid and expensive. Tyk is Go, and the nomenclature used within Tyk, like "sessions", was bizarre; again, enterprise features were paid.
Decision: We ultimately decided to roll our own using ExpressJS into Express Gateway, because the use case for using ExpressJS as an #API #gateway was tried and true. In fact, all the enterprise features that the other two charge for, #OAuth2 introspection etc., were freely available within ExpressJS middleware.
Outcome: We open-sourced Express Gateway with a core set of plugins, and the community started writing their own, and could quickly do so by rolling lots of ExpressJS middleware into Express Gateway
We're a small startup in San Francisco (team of 18 people). After spending lots of time building our core technology, it was time to bring it to life and deploy with several very large customers (500+ API requests/customer/minute).
We looked for a solid API management solution that would allow for easy authentication, quick installation and great logging features (requests and responses). After looking at various (very) expensive solutions out there, we ran into Kong.
After testing it for a few days, we deployed quickly to production to serve the needs of our customers. 3 weeks in, our experience has been great. Highly recommended to anyone who's looking for API management solutions.
P.s. Scored "Reliability" as "OK" for now, given the lack of data. Will definitely update once we've had Kong in production for a longer period of time.
We use HAProxy to load balance between our webservers. It balances TCP between the machines round robin and leaves everything else to Node.js, leaving the connections open with a reasonably long time to live to support WebSockets and re-use of a TCP connection for AJAX polling.
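For reference, the round-robin TCP setup described here can be expressed in a short haproxy.cfg sketch. This is an illustrative fragment, not the poster's actual configuration: the addresses, ports, and one-hour tunnel timeout are assumed values for a Node.js backend with WebSockets and AJAX polling.

```
frontend web_in
    bind *:80
    mode tcp
    default_backend node_apps

backend node_apps
    mode tcp
    balance roundrobin
    # keep idle connections open long enough for WebSockets / AJAX polling
    timeout tunnel 1h
    server app1 10.0.0.11:3000 check
    server app2 10.0.0.12:3000 check
```

In `mode tcp`, HAProxy leaves everything above layer 4 to Node.js, exactly as the review describes.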
HAProxy manages internal and origin load balancing using KeepaliveD. Two small servers host the entire site, never moving above 15% load even during the largest load spikes.
We use HAProxy to balance traffic at various points in our stack, including nginx nodes on different physical machines, and api nodes on the backend.
I use HAproxy primarily for application routing and SSL termination. I also use its logs and statistics to visualize incoming traffic in Kibana.
We use HAProxy to load balance web requests for our web application, but also for some internal load balancing of microservices.
And what if developers could also code the load balancer? Add plugins, dynamically change backends: Kong gives you this versatility | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00232.warc.gz | CC-MAIN-2019-47 | 5,166 | 28
https://wheatoncollege.edu/academics/departments/physics-and-astronomy/research-facilities/ | code | Student research opportunities are central to our mission. We welcome students to collaborate with faculty in our research labs, whether they are senior physics majors or undecided first-year students.
Student-faculty collaborations are supported in several different ways:
- Wheaton Research Partnerships and Summer Research Fellowships
- Wheaton Foundation Grants and Filene Center travel support
- Rhode Island Space Grant Fellowships
- National Science Foundation
- Grants from industrial partners
A number of students also participate in Research Experiences for Undergraduates at other institutions. Many student projects have been presented at the spring Wheaton Academic Festival, at local and national conferences, and in professional publications.
A sample of recent research projects:
- Summer 2016: Yuying Sun. “Analyzing the motion of a chaotic pendulum”. Funding: Wheaton Merit Scholarship awarded to Yuying.
- Summer 2016: Macgregor Sullivan. “Assembling and testing an astronomical spectrograph”. Funding: Wheaton Merit Scholarship awarded to Mac.
- Summer 2016: Raymond Zhang. “Analyzing the motion of Barnard’s Star”. Funding: NASA/RISG (Proposal PI: Maitra, co-PIs were Profs. Jenni Lanni and Laura Ekstrom in Biology).
- 2015-16: Aaron Portanova. “Designing a full-dome digital projection unit”. In collaboration with Prof. Tim Barker.
- 2015-16: Madison Borrelli. “Designing a sundial customized for Wheaton’s location” Funding: WRP.
- Summer 2015: Madison Borrelli. “Modeling spectral energy distribution of X-ray binaries”. Funding: Mars Faculty/Student Research Grant.
- 2014-15: Allegra Kurtz-Rossi ’15. “Timing analysis of X-ray binaries”. Independent Study.
- 2014-15: John Scarpaci ’17. “Correlation between multi-wavelength light curves of Aquila X-1 during different outbursts”. Funding: WRP.
- Summer 2014: Ryan Dill ’15. “Modeling X-ray spectra of black holes“. Funding: NASA/Rhode Island Space Grant (NASA/RISG).
- Summer 2014: John Scarpaci ’17. “Correlation between multi-wavelength light curves of Aquila X-1 during different outbursts”. Funding: Mars Faculty/Student Research Grant.
- 2013-14: Ryan Farber ’15. “Simulating black hole accretion disks”. Funding: Wheaton Research Partnership (WRP).
- 2013-14: Sean Weinstein ’17. “Analyzing soft-X-ray, hard X-ray, and optical light curves of the neutron star X-ray binary system Aquila X-1”. Funding: WRP.
Dan Demeo ’12 discusses his research | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00446.warc.gz | CC-MAIN-2023-40 | 2,494 | 22 |
https://sqllive360.com/Events/2012/Sessions/Wednesday/SQW9-Master-Data-Services-in-SQL-Server-2012.aspx | code | Director of Emerging Technologies
Perpetual Technologies, Inc.
Most DBAs at one time or another have been taught the importance of master data management (MDM), which comprises a set of processes and tools to keep your non-transactional data in a consistent state. However, with today's fast-paced environment and tightening budgets, most DBAs lack the resources to properly implement it. With the upcoming release of SQL Server 2012, Microsoft has taken their second attempt at providing a tool known as Master Data Services (MDS) to aid the DBA in this endeavor. In this session, you will learn how Master Data Services is implemented in SQL Server 2012, setting up models, working through the Excel plug-in, and utilizing DQS and business rules to help with that dirty data. This session will definitely be an eye-opener for some that were on the fence with the first release, and those that are coming into the platform with a fresh set of eyes!
You will learn:
- The basic principles that revolve around Master Data Management as it pertains to the Master Data Services platform in SQL Server 2012
- How to implement a basic Master Data Services project from setup to model creation
- How to leverage external tools like the Excel Add-in and Data Quality Services (DQS) to enhance the Master Data Services experience | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474700.89/warc/CC-MAIN-20240228080245-20240228110245-00096.warc.gz | CC-MAIN-2024-10 | 1,337 | 7 |
https://answers.sap.com/questions/1698485/b1de-12-installation-issue.html | code | I am posting this topic for information rather than as a question.
I have been using B1DE 1.1 for a while without any problems.
Having installed B1DE 1.2 on two patched-up XP SP2 machines, I have found the following:
1. Notebooks can no longer hibernate or suspend. The event log claims the process preventing this is svchost.exe, but this is a red herring.
2. No other software installations are possible. For instance, Microsoft security updates fail when installing. But the same goes for other SAP software installations.
On further inspection, it turns out that the 1.2 installation appears incomplete to the Windows Installer. The symptom of this is that a couple of msiexec.exe processes are started after every reboot. These need to be killed if hibernation/suspension requests are to be successful. Also, once killed, further software installations can go ahead, though in the course of installation, the Windows Installer requests that actions from an incomplete installation of the SAP Business One Development Environment are rolled back. Doing this seems to have no effect on B1DE which is good I suppose.
The only solution appears to be to uninstall B1DE 1.2 from Add/Remove Programs.
I realise that B1DE is an unsupported piece of software, but it would be great if the installation routine problem can be looked into. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506481.17/warc/CC-MAIN-20230923130827-20230923160827-00493.warc.gz | CC-MAIN-2023-40 | 1,332 | 8 |
https://www.blackhatworld.com/seo/facebook-auto-reply-script.346721/ | code | I have been re-vamping a customer's social site this week and thought I'd share a little auto-reply tool that is being utilized. The FB AutoReply. What does it do? - At the current moment it is set up to auto-reply to any b-day messages that you get. Since this is normally the largest influx of comments that you receive, it is currently set up for such; however, it can easily be edited to auto-reply to any keyword phrase. The current keyword phrase is "happy birthday". The system searches the posts for any updates, and once the keyword is found, it will auto-reply back with a pre-defined comment. If you guys run into any trouble, let me know. | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746639.67/warc/CC-MAIN-20181120191321-20181120213321-00228.warc.gz | CC-MAIN-2018-47 | 646 | 1
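The matching logic described (scan each post for a keyword phrase and answer with a canned comment) can be sketched in a few lines. This is a hypothetical illustration, not the actual tool: it does not touch the Facebook API, and the function name and default reply are invented.

```python
# Hypothetical sketch of keyword-triggered auto-reply logic.
def auto_reply(post_text, keyword="happy birthday",
               reply="Thanks for the birthday wishes!"):
    # Case-insensitive substring match on the post body.
    if keyword.lower() in post_text.lower():
        return reply
    return None

replies = [auto_reply(p) for p in
           ["Happy Birthday, mate!", "Nice profile pic", "happy BIRTHDAY!!"]]
```

Swapping in a different `keyword` and `reply` reproduces the "any keyword phrase" behavior the post mentions.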
https://sqlcoffee.com/Tips0026.htm | code | How to manually uninstall SQL Server.
Applies to: All editions of Microsoft SQL Server 2008, Microsoft SQL Server 2008
R2, Microsoft SQL Server 2012, Microsoft SQL Server 2014, Microsoft SQL Server
2016, Microsoft SQL Server 2017, and Microsoft SQL Server 2019.
If you need to manually uninstall SQL Server 2005, please refer to Microsoft
If you are trying to uninstall SQL Server 2012 and you receive the error message
"The operating system on the computer does not meet the minimum requirements for
SQL Server 2012", please perform the following procedure as mentioned here:
- Open File Explorer or WIndows Explorer and go to the folder C:\Program
Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012.
- Make a right click on the Setup.exe file, select Properties, click on
the Compatibility tab, click on the checkbox name "Run this program in
compatibility mode for", from the combo box select "Windows Vista (Service
- Go to Control Panel, Select "Uninstall a program" and uninstall SQL
If you are not able to uninstall SQL Server, please try the procedure below.
If you have tried everything and you are still unable to uninstall SQL Server,
please proceed to manually uninstall SQL Server using the following procedure:
1. Uninstall all SQL Server components you can using Control Panel ->
Programs and Features
2. Backup the registry.
3. Delete the following keys in regedit:
--HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server
4. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
and delete all the sub-keys referencing SQL Server.
5. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services and delete all the
keys referencing SQL Server.
6. Rename all the SQL Server folders in the computer like C:\Program
Files\Microsoft SQL Server and subfolders. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00689.warc.gz | CC-MAIN-2023-06 | 1,787 | 28 |
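As a safety measure, the registry deletions in steps 3 to 5 can be generated as commands first and reviewed before anything is run. The sketch below is a hypothetical helper, not part of the original procedure; it only builds `reg delete` strings for the key listed in step 3, and you would extend `KEYS` yourself after backing up the registry per step 2.

```python
# Hypothetical dry-run helper: build (but do not execute) the reg delete
# commands for the keys named in the procedure above.
KEYS = [
    r"HKLM\SOFTWARE\Microsoft\Microsoft SQL Server",
]

def reg_delete_commands(keys):
    # /f suppresses the confirmation prompt; review the list before running.
    return [f'reg delete "{key}" /f' for key in keys]

commands = reg_delete_commands(KEYS)
```

Printing `commands` and pasting the reviewed lines into an elevated prompt keeps the destructive step explicit and auditable.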
http://stackoverflow.com/questions/3350113/file-is-not-being-accessed-in-php-script-in-yahoo-small-business-domain | code | Hi everyone, I'm having a problem with include/require_once in PHP, by which I am accessing a configuration file to connect to the database.
This problem exists on Yahoo Small Business servers where I hosted my site; on a free host it works fine for the same code.
The database connection is working fine, but the including of files is not working and is not even showing an error.
Please help me out to solve this issue.
If this question should be placed in another location, please lead me there.
<?php session_start(); require_once("classes/DbConfig.php"); $object_db = new DB(); $object_db->open_connection(); ?>
The above is the code for accessing the DbConfig.php file, which is in the classes folder, and the below is for creating an instance of the class and opening the connection, which is a function in that class.
Here I used include also, but there is no display of any error.
I checked whether that file exists or not using the following code.
<?php if(file_exists("classes/DbConfig.php")) echo "File Found"; else echo "File Not Found"; ?>
it displays "File Found".
And there are no error logs in the domain control panel | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768529.27/warc/CC-MAIN-20141217075248-00031-ip-10-231-17-201.ec2.internal.warc.gz | CC-MAIN-2014-52 | 1,095 | 12 |