url: string (length 13–4.35k)
tag: string (1 class)
text: string (length 109–628k)
file_path: string (length 109–155)
dump: string (96 classes)
file_size_in_byte: int64 (112–630k)
line_count: int64 (1–3.76k)
https://www.wednesday.is/engineering-case-studies/full-stack-development-for-a-insuretech-company-in-europe
code
Full Stack Development for an InsureTech Company in Europe The customer wanted to build a platform to sell insurance policies directly to customers via an interactive web and mobile application. The policies were a combination of life, health, and income insurance. These were created by collaborating with industry-leading insurance companies in these sectors. The platform had to automate functions such as accounting, billing, policy document generation, and payments. A configuration management system had to be built to allow generation of connectors to talk to legacy systems from partner insurance providers. Each legacy system from a partner insurance company had a different interface. The platform needed to be built such that integrating with any legacy system could be made possible using a configuration management and DevOps pipeline. A reusable UI component library had to be built. This library had a set of components that adhered to the design guidelines for the application. Data needed to be encrypted and stored securely. Wednesday set up a team of 4 senior software engineers, a designer, and a project manager to work on the project. The design team worked with the customer to understand their customers and their pain points. Wireframes and low-fidelity mockups were created to validate that users found the product easy to use, followed by high-fidelity designs. The engineering team worked on creating a connector ecosystem where external legacy systems could be mapped to the API interface that the platform understands.
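The connector ecosystem described above might look, in spirit, something like the following. This is a hypothetical sketch in C++ (the case study does not name an implementation language), and every field name and type here is invented for illustration:

```cpp
#include <map>
#include <string>

// Hypothetical sketch of a configuration-driven connector: a legacy
// record's field names are translated into the field names the
// platform API expects, so onboarding a new partner means shipping a
// mapping through the configuration pipeline rather than new code.
using Record = std::map<std::string, std::string>;
using FieldMapping = std::map<std::string, std::string>;

Record translate(const Record& legacy, const FieldMapping& mapping) {
    Record platform;
    for (const auto& [field, value] : legacy) {
        auto it = mapping.find(field);
        if (it != mapping.end()) {
            platform[it->second] = value;  // rename to the platform's field
        }
    }
    return platform;
}
```

A partner whose legacy system exposes, say, a `POLNUM` field could then be integrated with a one-line mapping such as `{{"POLNUM", "policy_id"}}`, delivered via the configuration management and DevOps pipeline.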
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00237.warc.gz
CC-MAIN-2022-49
1,958
17
https://lists.fsci.org.in/hyperkitty/list/[email protected]/thread/SDCWUEMJC64PL27RCRH6KWSJDVBIKOAV/
code
Harish Narayanan said on Fri, May 07, 2004 at 12:03:03PM -0400,: A primary reason they aren't as widespread is because of societal inertia, Do I want it installed? No. Do I need it installed? Not really. Am I one of the 210 million who paid for it anyway? Yes. Did I have much of a choice? Yes and no. You had a choice, did not exercise it, and blame `inertia'? Huh?? Did you say `What about the greenbacks I paid?'? We have a restrictive trade practices law much stronger than that of the US of A. I could have tried really hard to find a company that Inertia, is the word. The largest and oldest hardware-cum-software vendor in the world supports GNU/Linux on their PCs. Maybe you did not ask the right RedHat is a major supporter of much free software and doesn't ship anything proprietary with Not very sure of that. Maybe, going by hearsay, this holds good for the free-as-in-beer version of RH which we used to get till sometime back. Cannot say the same for the paid version. I have not used the paid version, so dunno. Hope somebody will clarify whether paid ISOs from RH contain non-free binaries. That apart, a major component of the RPM system is considered non-free by Debian since its license imposes burdens on users. I respect them for this, Mahesh T. Pai, LL.M., 'NANDINI', S. R. M. Road,
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474641.34/warc/CC-MAIN-20240225171204-20240225201204-00104.warc.gz
CC-MAIN-2024-10
1,306
26
http://stackoverflow.com/questions/1980477/setup-mfc-project
code
A word or two about the project I have to make. I have a list of products (XML formatted), and I have to generate a bar code from that list. Here are the requirements: - The technology has to be MFC, VS 2005 or VS 2008 - All functionality must be in one DLL - The same solution should have a simple tester for the DLL For example, my DLL has a "Write" method which is implemented in a separate file for PDF417 and in a separate file for some other bar code, so that the user can choose which bar code to use. Since I have no knowledge of MFC, I really don't know how to even start. I read some tutorials, created the DLL with some dummy method, and then tried to use it in the tester application, but no luck. I know that this is a "needle in a haystack" type of question, but if someone could help me with how to set up/architect this project I would be very grateful.
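One possible shape for the requirements above, sketched in plain, portable C++ for brevity (the MFC project settings and the `__declspec(dllexport)` declarations a real DLL build would need are omitted, and all class and method names are placeholders): an abstract writer interface, one implementation file per symbology, and a single factory function exported from the DLL.

```cpp
#include <memory>
#include <string>

// Shared header, included by both the DLL and the tester project.
class IBarcodeWriter {
public:
    virtual ~IBarcodeWriter() = default;
    // Returns the encoded bar code; a real implementation would
    // produce an image or drawing commands, not a tagged string.
    virtual std::string Write(const std::string& data) = 0;
};

// Pdf417Writer.cpp -- one symbology per source file.
class Pdf417Writer : public IBarcodeWriter {
public:
    std::string Write(const std::string& data) override {
        return "PDF417:" + data;  // placeholder encoding
    }
};

// Code128Writer.cpp -- a second symbology as an example.
class Code128Writer : public IBarcodeWriter {
public:
    std::string Write(const std::string& data) override {
        return "CODE128:" + data;  // placeholder encoding
    }
};

// The factory is the one entry point exported from the DLL; the
// tester links against it and never sees the concrete classes.
std::unique_ptr<IBarcodeWriter> CreateWriter(const std::string& type) {
    if (type == "pdf417") return std::make_unique<Pdf417Writer>();
    return std::make_unique<Code128Writer>();
}
```

The tester application then only needs `CreateWriter("pdf417")->Write(productXml)`; switching to another bar code is a one-string change.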
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164653483/warc/CC-MAIN-20131204134413-00090-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
836
8
http://daejongfilmaward.org/bq-karalis-room-divider/karalis-room-divider-ebay-bq-room-divider/
code
B&Q Karalis Room Divider: Karalis Room Divider Ebay. Published Friday, April 3rd 2020, under room divider screen. One of the principal objectives of room dividers is to use the space in the ideal manner. You will be able to enjoy many positions with your twin lounger at a fantastic price. Pushing all upholstery towards the middle of the room is perfect for conversation and engagement. As soon as you have the panels picked out you can begin to apply the decorative touches you desire. Solid panels are less difficult to maintain. While antique screens and room dividers may not be perfect for contemporary bedrooms, both regarding practicality and fashion, one can still draw a lot of inspiration from them. Wall dividers can be put almost anywhere and are made of various materials. Wood dividers were hard to find. One of the most frequent dividers used in homes is a simple wall divider. You can create a multi-use divider. If you select a rice paper room divider, this choice is ideal for allowing light to pass through. You can also use room dividers as a way of hiding workspaces. You may even make an offbeat room divider with the use of some crates! Be sure to pick the most suitable size to fulfil your privacy requirements.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141202590.44/warc/CC-MAIN-20201129184455-20201129214455-00523.warc.gz
CC-MAIN-2020-50
1,848
11
https://www.tfw2005.com/boards/threads/review-of-rise-of-the-dark-spark-for-ps4.990952/
code
I'm doing one. I've got the game - I won a bunch of bonus credits on the PS4 store and figured what the hell, so I direct downloaded it. I'll periodically update this thread with my thoughts on the game. Note that this is specific to the PS4/XBone version. The Wii U version doesn't have escalation, and the last-gen versions are graphically inferior. Also: I've been one of the most vocal board members AGAINST this game, since it was announced, because it was so obviously just a cash in, recycling assets from the past games and doing almost nothing new at all. I hope I'm wrong. I doubt I am. More to follow!
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817455.17/warc/CC-MAIN-20240419203449-20240419233449-00729.warc.gz
CC-MAIN-2024-18
612
1
https://geekmythospodcast.com/2021/09/01/titans-superman-lois-what-if-spiderman-no-way-home/
code
In this episode we talk about Titans, Superman & Lois, What If …?, and Spider-Man: No Way Home! So much to cover, and it's a long one. Hope you enjoy, and we hope to see you on the live streams! Also, find us on INSTAGRAM @geekmythospodcast TIKTOK: @geekmythospodcast FACEBOOK: https://www.facebook.com/GeekMythosPo… YOUTUBE: https://www.youtube.com/channel/UCMOh… Twitter: https://twitter.com/mythos_geek Twitch: https://www.twitch.tv/geekmythospod Tell us if you found our Easter eggs in this episode. Or should I say Riddle? Stay safe and be kind! Injustice anywhere is a threat to justice everywhere. - Martin Luther King Jr. Or find all our links HERE!! https://linktr.ee/GeekMythosPod
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510529.8/warc/CC-MAIN-20230929222230-20230930012230-00807.warc.gz
CC-MAIN-2023-40
699
2
https://ayurveda.ariakedo.net/en/2021/12/19/i-want-to-get-something-while-traveling/
code
I want to get something while traveling! - Ayurvedic therapist training course, - Yoga training course, and - Kalari (traditional martial arts) training course “I want to heal my mind and body”, “I want to train”, “I want to go sightseeing in Kerala” … don’t your family and friends have different wishes for what to do during vacation? We let you choose a program that suits the purpose of your vacation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100452.79/warc/CC-MAIN-20231202203800-20231202233800-00488.warc.gz
CC-MAIN-2023-50
429
6
https://www.enotes.com/homework-help/what-does-visitor-say-about-fate-monkeys-paw-520084
code
What does the visitor say about fate in "The Monkey's Paw"? The visitor's explanation of the supposed magical powers of the monkey's paw is very brief. He says virtually everything he knows about it in one paragraph of dialogue. "It had a spell put on it by an old fakir," said the sergeant-major, "a very holy man. He wanted to show that fate ruled people's lives, and that those who interfered with it did so to their sorrow. He put a spell on it so that three separate men could each have three wishes from it." This seems like the author's way of getting into yet another story about a man who has three wishes and makes a hash of them. It is hard to understand how anyone could avoid having wishes or desires. Are all of these to be regarded as interfering with fate? We would end up doing nothing if we didn't try to attain at least some of our desires. The author, W. W. Jacobs, has the old fakir put his spell on a monkey's paw so that it will seem exotic to the Whites. There are no monkeys native to Britain. So at least that seems to establish part of the story, that it came from India. If Mr. White got hold of the monkey's paw and decided to make a wish with it, wouldn't that be part of his fate in itself? Wouldn't it have been preordained by fate that he would do just that? Even if we change the scripts of our lives, couldn't it have been preordained that we would change those scripts?
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809778.95/warc/CC-MAIN-20171125105437-20171125125437-00792.warc.gz
CC-MAIN-2017-47
1,405
5
https://nitinsawhney.org/2021/08/13/hcid/
code
Intersection of computing, interaction technologies and human-centred design research. Human-Computer Interaction and Design (HCID) is a newly established research area in the Department of Computer Science at Aalto University, which I helped found with other faculty colleagues in our department in 2021. HCID examines the intersection of computing, interaction technologies and human-centred design research. It engages transdisciplinary approaches to critical research, design, development and evaluation of current and novel technologies, behaviours, processes, models, practices and experiences among human, artificial and non-human entities, while examining the impacts on people, society and ecology. HCID draws on emerging research, scholarship and practice across disciplines including computer science (computation, software engineering, machine learning, AI, interaction design), cognitive science, psychology, sociology, anthropology, media, arts, design, learning/education, ethics, and public policy, among others.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00165.warc.gz
CC-MAIN-2023-23
1,069
4
https://ieor.berkeley.edu/jobs/research-development-role-at-headlands-technologies-llc/
code
Research Development Role Research Developers are responsible for utilizing and extending our proprietary research and trading platforms to develop highly performant quantitative trading strategies. This includes using methods from statistics and machine learning on a distributed research cluster to determine optimal models and strategy parameters. They use Java and C++ to maintain and enhance our research and trading platforms. Working with members of our Platform Development and Trade Operations teams, they facilitate deployment of new strategies and features to our production trading environment. Additionally, they monitor all aspects of strategy performance, including P&L, model performance, and latency, in order to drive improvements in our research and trading processes. · Bachelor's, master's, or PhD in a technical field (e.g., computer science, mathematics, physics), or professional quantitative trading experience · Demonstrated interest in quantitative trading · Fundamental knowledge of software engineering, including a superior ability to write in Java and C++ · Exceptional quantitative knowledge and problem-solving ability · Ability to work both independently and in a collaborative team setting
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00831.warc.gz
CC-MAIN-2024-18
1,223
7
https://mariatta.ca/posts/ice_cream_selfies/2018/djangoconus_2018/
code
Ice Cream Selfie @ DjangoCon US 2018 Hammond’s Gourmet Ice Cream It was fun visiting San Diego for DjangoCon US 2018. I gave my talk Don’t Be a Robot; Build the Bot. I think it was well received 😊 This was also a special trip because, well, it happens that my husband also needed to travel for work at the same time, so we didn’t have anyone to look after our two kids. I ended up taking my kids with me to the conference. Thanks to DjangoCon US for accommodating us and helping with the childcare situation so I could give a keynote at this conference. After my talk, I took my kids to get ice cream at Hammond’s Gourmet Ice Cream.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100762.64/warc/CC-MAIN-20231208144732-20231208174732-00293.warc.gz
CC-MAIN-2023-50
673
7
http://search.livetor.ru/?query=free+download+windows+7+home+premium+32+bit+product+key
code
This is the official Windows 7 Home Premium ISO download with Service Pack 1 (SP1) from MSDN, along with the Windows 7 product key. Softlay provides a free single-click direct download of the Windows 7 Home Premium ISO full version for both 32-bit and 64-bit. Note: All Windows 7 product keys and activation keys shared below are only for students. We request that you please purchase genuine product keys from the official Microsoft site. Product key for Windows 7 Home Premium 64-bit.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039749562.99/warc/CC-MAIN-20181121173523-20181121195523-00441.warc.gz
CC-MAIN-2018-47
478
2
https://boffosocko.com/tag/blogging/
code
I check my blog every day, not through vanity (I don't have stats) but out of interest to see what's in the "on this day" section. It's why I added it after all. There has been discussion for some time about how the default, reverse chronological view isn't very effective as we just funnel readers t... A personal blog is an online journal, your day to day thoughts published on the web rather than in (or in addition to) a physical notebook. It is an unfinished story, a scratch pad, an outboard brain; and while there are highlights it is more the journey that’s the important aspect. Colin nibbles around the edges of defining a digital public commonplace book and even the idea of “thought spaces”, though without explicitly using either phrase. –November 20, 2019 at 09:20AM If this blog had a tagline it would be "an ongoing conversation with myself." I wanted to talk about blogchains, or threads, and Elder-blogging in "Blogging for now" but couldn't remember where I'd read about it. Chris Aldrich's post "On blogging infrastructure" reminded me. It was an idea formulate... This post is part of Blogging Futures, a collaborative self-reflexive interblog conversation about the future of blogging. Feel free to join the conversation! To make conversations more weblike than linear, more of a garden and less of a stream, to create “a broader web of related ideas”. These sentiments from Chris Aldrich resonate with me. But how do we achieve this? The fact that there is no “silver bullet” is the exciting part. I’ll agree that there is no silver bullet, but one pattern I’ve noticed is that it’s the “small pieces, loosely joined” that often have the greatest impact on the open web. Small pieces of technology that do something simple can often be extended or mixed with others to create a lot more innovation. –November 17, 2019 at 02:35PM The final prompt is looking back on the conversation that has grown on the blogchain... What have you learned from reading or participating? 
Primarily I’ve been heartened to have met a group of people who are still interested in and curious about exploring new methods of communication on the web! –November 17, 2019 at 02:41PM Is there a particular project you want to pursue? Though I joined late, the course has spurred me to think about the concepts of mixing blogchains with webmentions, and resparked my interest in getting wikis to accept webmentions as well for building and cross-linking information. –November 17, 2019 at 02:42PM Blogging is great, but it sometimes feels like every blog is an island. To have a robust blog society requires connection, community, conversation. Part of the problem is we don’t have many great ways to connect blogs together into larger conversation structures. I suspect this response (part read post, part annotation post, part reply, and with Webmentions enabled) will be somewhat different in form and function from those in the preceding conversations within the blogchain, but I offer it, rather than the standard blogpost or even reply, as the sort of differently formed response that blogging futures suggests we might experimentally give. Sure we have hyperlinks, and even some esoteric magic with the likes of webmentions. But I want big, simple, legible ways to link blog discussions together. I want: blogging megastructures! In practice, building massive infrastructure is not only very difficult, but incredibly hard to maintain (and thus also generally expensive). Who exactly is going to maintain such structures? I would argue that Webmentions aren’t esoteric, particularly since they’re a W3C recommendation with several dozen server implementations, including support for WordPress, Drupal, and half a dozen other CMSes. Even if your particular website doesn’t support them yet, you can create an account on webmention.io to receive/save notifications as well as to send them manually. 
–November 17, 2019 at 02:14PM Cabinet: one author or several; posts curated into particular collections or series, often with thematic groupings, perhaps a “start here” page for new readers, or other pointers to specific reading sequences Colin Walker has suggested something like this in the past and implemented a “required reading” page on his website. –November 17, 2019 at 02:18PM Chain: perhaps the simplest collaborative blogging form; a straightforward back and forth exchange of posts exploring a particular topic. Mesh: like a chain, but with multiple participants; still a legible structure, e.g. alternating / round-robin style, but with more possibilities for multiplicity of perspectives and connections across posts. Fractal: multiple participants and multi-threaded conversation; more infinite game branching; a possibly ever-evolving and mutating conversation, so could probably use some kind of defined endpoint, maybe time-bound. In the time I’ve been using Webmentions, I’ve seen all of these sorts of structures using them. Of particular interest, I’ve seen some interesting experiments with Fragmentions that allow one to highlight and respond to even the smallest fragments of someone’s website. –November 17, 2019 at 02:20PM I tend to think of blogging as “thinking out loud”, a combination of personal essay, journaling, brainstorming and public memo. Another example in the wild of someone using a version of “thinking out loud” or “thought spaces” to describe blogging. –November 17, 2019 at 02:25PM Baroque, brutalist, Borgesian — let’s build some blogging megastructures. Take a peek at https://indieweb.xyz/ which is a quirky and interesting example of something along the lines of the blogging megastructure you suggest. –November 17, 2019 at 02:27PM This bafflingly huge waste of my time (and yours) was prompted by two seemingly unlinked events. (Of course, no two events are truly unlinked, but let that ride.) 
First, there was a bafflingly stupid "article" from Mother Jones: Let’s Remember Some Blogs – Mother Jones. Why stupid? Because as fa... I’ve been reading through a series of essays on Blogging Infrastructure that are part of CJ Eller’s Blogging Futures. There are some interesting ideas hiding in there, including the idea of a blogchain, which appears to have originated on Venkatesh Rao’s site Ribbonfarm. As best as I can tell, it amounts to linking a series of blog posts by potentially multiple authors into a linear long form piece. It reminds me of the idea of a webring, but instead of being random (though some may have historically been completely linear in nature), they’ve got slightly more structure, and instead of linking entire websites, they’re linking posts on a particular idea or topic. I’ve also seen some tangential mentions among the Blogging Futures crowd of Webmention, which is essentially a standardized web technology that allows notifications or @mentions between websites on different domains running completely different software. I know that Tom Critchlow, who is a member of the blogchain, has recently set up webmentions, so I’m curious to hear his impression of what a blogchain means after he’s begun using Webmention. (One difficulty: he’s using a static site generator, which will tend to make his experience with them a tad more fraught compared with services that have it built in or available via simple plugins.) To me there’s more value in combining the two ideas of Webmention and blogchain, wherein each post is able to webmention the other posts within a particular blogchain and thereby create a broader web of related ideas. Of course this is all very similar to ideas like IndieNews and Kicks Condor’s IndieWeb.xyz aggregation hub, which allow users to post to them by means of Webmention. In some sense this allows for a central repository or hub that collects links to all of the responses for those who want to participate. 
These responses could obviously be sorted by topic (aka tag/category), author, and even date. Naturally if each post includes links to all the other pieces in such a blogchain, and all the sites accept and display webmentions, then there will be a more weblike chain of discussion of the topic rather than a more linear one. I’m not aware of it being done, but I’ve always sort of wished that someone would add webmention support to a wiki platform. Many has been the time I wish I could have added a link into the See also section of the IndieWeb wiki simply by linking to a particular page and sending a webmention. Lots of my online documentation references that wiki and it would be wonderfully useful for links to my content to automatically show up there. Later, others could add some of my content back into the wiki in a more fully fleshed out way, but at least the references would be there. Imagine how the world’s knowledge would be expanded if a larger wiki like Wikipedia had the ability to accept incoming links this way!? I’ll mention that both the aggregation hubs and the wikis can help to serve as somewhat more centralized means of discovery on the web, which also helps to fuel idea and content production. All the people I know who have added Webmention have generally fallen in love with it as a new means of posting into and interacting within a rejuvenated blogosphere. There’s more power in posting to one’s own website while still being able to interact in a more social sort of way. For the second prompt of the Blogging Futures course, we want to explore the question of infrastructure of blogs. The discussion has shifted to thinking about how we assess the infrastructure of blogs. This entails not only the infrastructural framework of writing on the web but the mental framework behind it too. For the first prompt of the Blogging Futures course, we want to explore the question of paradigms. 
At the heart of this course is a simple question: where do we want blogging to go? Embedded in that question is another equally important one: Where do we not want blogging to go? So where do we want/not want blogging to go? Are there paradigms you find useful in exploring these questions? Does writing on the web even exist on such a spectrum for you or is it something more complicated? Along with these questions, there are some paradigms below that could serve as prompts for your own reflection. There have been quite a few articles recently about the importance of the personal site, and the blogging community. It’s a sentiment I’m super excited about. Rian Van Der Merwe has probably the simplest point. Blogs are the front page of the internet, and it’s their freedom that gives them ... For the past 15 years, I’ve included blog assignments in my classes as a default, routine, and generally low-stakes assignment. It began with a simple journal where students kept track of their progress through a video game, and through the years, the assignment has ranged from similarly simple logs or progress reports to the more ornate and decorous “features articles” where students seek to emulate magazine writing and engage with a public audience. At times, like when having a platform online was still a novelty and the adrenaline rush of Web 2.0-fueled activism took flight in the optimism of Barack Obama’s first presidential campaign, blogging totally made sense. As a classroom experience, a blog assignment helped students find their digital identity through written expression. By finding their voice digitally, students found themselves. But while this will still happen, and while I still see brilliant writing from my students, the era when the exigency of a blog assignment can be reliably vindicated by an authentic external audience has ended. 
It’s time for something else, which means it’s time to re-evaluate what blogs have been and what we have needed them for in order to find the best ways to meet those goals through other means. In this short presentation, I will offer several suggestions. This is, however, an aspirational proposal. I’m writing this between semesters as I reflect on the Fall — where blog assignments didn’t always meet my goals or in some cases arguably undermined other goals for my class — and thinking ahead to the Spring — when I hope to implement some new assignments based on this recent conviction about the ineffectuality of blog assignments. Therefore, by June, my expectation is that I will have something new to report: either finding success with an entirely new set of assignments and corresponding tools, or returning to the familiar embrace of blog assignments with a renewed sense of their value. Most likely, I’ll be somewhere in between, but my hunch is that different forms of discursive content creation will help students take control of their learning and find direction for their digital identities. Whatever I find in the coming semester, I’m confident that I’ll be ready to share some insight into the intents, purposes, and outcomes of inviting students to do intellectual work on the internet of 2019. Notes as they occur to me while I’m watching this video: To me blogging is a means of thinking out loud. Of course having a site doesn’t mean one is blogging. In fact, in my case, I’m collecting bits and pieces on my site like a digital commonplace book, and out of those collections come some quick basic thoughts, and often some longer pieces, which could be called blog posts, but really are essays that help to shape my thinking. I really wish more people would eschew social media and use their own websites this way. We need to remember that a website or domain is FAR, FAR more than just a simple blog. It kills me how many in the edtech/Domains space seem to love memes. 
It’s always cute and fun, but they feel so vapid and ineffectual. It’s like copying someone else’s work and trying to pass it off as our own. English teachers used to say, “Don’t be cliché,” but now through the use of digital memes they’re almost encouraging it. Why not find interesting images and create something new and dramatically different? (I can’t help but think of the incredibly unique Terry Gilliam “cartoons” in Monty Python’s Flying Circus and the phrase “and now for something completely different…”.) Zach uses the phrase “personal learning journal” but doesn’t quite get to the idea of using domains as digital commonplace books. He also looks at other social sites like TikTok, Instagram stories, YouTube channels, and Twitter hashtags, but doesn’t consider that what those things are could easily be contained within one’s own personal site/domain. The IndieWeb has been hacking away at just this for several years now. What he’s getting at here, but isn’t quite saying, is “Why can’t we expand the Domain beyond the restrained idea of ‘just a blog’?” And isn’t that just the whole point of the IndieWeb movement? Your website can literally be anything you want it to be! Just go do it. Invent. Iterate. Have fun! Zach should definitely take a look at what one can do with Webmention. See: Webmentions: Enabling Better Communication on the Internet. I suppose some of the restraint is that most people don’t know that it’s relatively easy now to get one domain to talk to another domain the way social sites like Facebook and Twitter do @mentions. And once you’ve got that, there’s a whole lot more you can do! Perhaps what we should do is go back to the early web and the idea of “small pieces, loosely joined”. What can we do with all the smaller, atomized pieces of the web? How can we use these building blocks in new and unexpected ways? To build new and exciting things? 
What happens when you combine Facebook, Twitter, Instagram, Snapchat, Blogger, Soundcloud, Foursquare, Flickr, Goodreads, Periscope, Lobsters, TikTok, Quora, Zotero, Flipboard, GitHub, Medium, Huffduffer, Plurk, etc., etc. altogether and mix them up in infinite ways? You get Domains! You may get something as cutting edge–but still relatively straight-laced–as Aaron Parecki’s website, which you might have to dig into to realize just how much he’s got going on there, or you might end up with something as quirky and cool as Kicks Condor’s site or his discovery/syndication channel Indieweb.xyz. So this website finally had an 11-year-overdue overhaul. Total redesign and optimisation. If you need yours sorting out, talk to Thatch, who did this one – he did such a great job. Have a rummage around to behold the goodness and read all of the words. There’s a bit of me that feels like announc... Simplenote is powered by Automattic, which also runs WordPress.com — so as you can imagine, we love blogging. I’ve written for a few different sites, some using WordPress and some not. No matter where I publish my posts, I have a great, consistent writing experience by drafting in Simplenote fir... Lurking, although the word seems to imply a negative connotation, has useful aspects nonetheless. It is a way of determining rules of behaviour for newcomers to a group. The most obvious characteristic of a lurker is that he’s at the fringe of a group, listening and observing. Being at the fringe may seem like a bad place from the core, but in fact it is a good position to build bridges to other groups, and be aware of other groups in the vicinity. In a face to face setting like a pub or a meeting of some kind, a lurker is visible, often shortly introduced, after which the focus of attention shifts to the established group members again. In on-line settings things are different. In some fora lurkers are encouraged to introduce themselves and then advised to lurk, i.e. 
observe and learn for a while. But at all times there is no way of knowing how many lurkers are there that you are unaware of. As lurkers are possible bridges to other groups, I as a blogger, would like to know: How many lurkers I have, who read my blog but don’t comment or post. Who they are Serverlogs can give some clues, and I keep a close watch on them. Dave Winer’s RSS-tool also brings new info to light.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00298.warc.gz
CC-MAIN-2019-47
18,138
78
https://www.pembe.io/blog/etrade-backtesting
code
Unlock the Potential of E*TRADE Backtesting for Your Investments In the realm of investment, the ability to forecast the performance of a trading strategy is invaluable. E*TRADE Backtesting offers investors this exact opportunity. By leveraging historical data, traders can gauge how their strategies would have fared in past market conditions, providing insights for future investments. - Learn what E*TRADE Backtesting is and how it can benefit your trading strategy. - Understand how to access and use backtesting tools on E*TRADE. - Discover strategies for more effective backtesting and common pitfalls to avoid. - Get answers to frequently asked questions about E*TRADE Backtesting. What is E*TRADE Backtesting? Backtesting is a pivotal tool for traders looking to optimize their strategies. It involves simulating a trading strategy using historical market data to evaluate its efficacy. How to Access E*TRADE Backtesting Tools To access E*TRADE's backtesting tools, log in to your account and navigate to the trading tools section. You should find resources dedicated to strategy testing. Importance of Backtesting Strategies Backtesting allows traders to: - Test the viability of trading ideas. - Understand potential risk and reward. - Gain confidence in their trading strategies. Defining Your Trading Strategy for Backtesting A trading strategy is only as good as its definition. Clearly lay out the conditions, rules, and parameters, including: - Entry and exit points - Trade sizes - Stop loss and take profit levels Step-by-Step Guide to Backtesting on E*TRADE Setting Up Your Backtesting Parameters Configure your backtesting settings to mirror the conditions in which you plan to trade. Selecting the Right Historical Data Choose data that reflects the same market conditions your strategy is designed to navigate. Executing the Backtest Run the simulation and observe how your strategy performs across different time frames and market scenarios. 
Analyzing Backtesting Results Effective analysis requires attention to detail in: - Performance metrics - Risk assessment - Consistency in different market conditions Understanding Performance Metrics Metrics to watch include: - Profitability: Total profits vs. losses. - Risk/Reward Ratio: Potential reward compared to potential risk. - Drawdown: The largest drop from a peak. Strategies for Effective Backtesting - Use a variety of market conditions. - Test over a significant data sample. - Refine strategies based on results. Common Pitfalls in Backtesting - Overfitting to historical data. - Ignoring trading costs. - Failure to account for market changes. Leveraging E*TRADE's Robust Backtesting Features E*TRADE offers features that help in refining your strategies such as: - Adjustable parameters for comprehensive testing. - A selection of indicators and historical data. - Visualization tools for easier analysis of results. Utilizing Advanced Tools and Indicators Explore advanced tools and indicators like moving averages or RSI to incorporate into your backtesting routines for better strategy refinement. E*TRADE Backtesting: Enhancing Your Trading Strategy Improving Your Approach with Backtesting Insights Utilize insights gained from backtesting to: - Adjust your strategy for different market conditions. - Improve money management techniques. - Tailor strategies to your risk tolerance. Updating Your Strategy Post-Backtesting Based on backtesting results, revise your plan to include new rules or exclude elements that do not contribute to success. The Role of Paper Trading After Backtesting Once a strategy is backtested, paper trading it in real-time markets allows further refinement without financial risk. FAQs on E*TRADE Backtesting How Accurate is Backtesting on E*TRADE? While backtesting can provide valuable insights, it is only as accurate as the data and assumptions made. Can I Backtest Options Strategies on E*TRADE? 
E*TRADE provides capabilities for backtesting a variety of strategies, including options. How Long Should I Backtest My Strategy? The time frame for backtesting should be long enough to cover various market conditions and cycles. Remember, backtesting is a hypothetical exercise and past performance is not indicative of future results. Use backtesting as one of several tools in your investment toolkit to aid in making educated decisions about strategy implementation.
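One of the metrics listed earlier, drawdown, is easy to compute yourself from an equity curve; a small Python illustration (the sample numbers are invented, and E*TRADE’s own tools report this for you):

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)               # running high-water mark
        worst = max(worst, (peak - value) / peak)
    return worst

# A toy equity curve: the account peaks at 120, then falls to 90.
equity = [100, 110, 105, 120, 90, 130]
print(max_drawdown(equity))  # 0.25, i.e. a 25% drawdown
```

The same pass over the equity curve can be extended to the other metrics mentioned (total profit, risk/reward) in a few more lines.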
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00181.warc.gz
CC-MAIN-2024-18
4,387
70
https://changelog.datapacket.com/ssh-key-management-180515
code
The client panel now allows you to manage SSH key-based authentication for new server installations. This includes adding and removing keys as well as disabling SSH root password login and allowing key authentication only. Adding and removing keys To add or remove keys, or to select the default key, navigate to the Security pane and follow the instructions. Installing your keys When performing a server installation via the client panel, you can choose to install any of your uploaded keys. Server installations requested via our Sales or Support teams always use the default key. If you wish to install different key(s), please specify in your request.
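For reference, the “key authentication only” behaviour toggled in the panel corresponds, on a typical Linux server, to OpenSSH settings like these (a generic `sshd_config` sketch, not DataPacket-specific; reload `sshd` after editing):

```shell
# /etc/ssh/sshd_config -- allow key-based logins only
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password   # root may log in with a key, never a password
```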
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151972.40/warc/CC-MAIN-20210726000859-20210726030859-00215.warc.gz
CC-MAIN-2021-31
656
6
http://meta.stackoverflow.com/questions/tagged/scope?sort=faq&pagesize=15
code
I touched on this quite a bit in our recent blog post, but I want to reiterate it again. It's fine if users post bug reports, requests for support or feature requests here on MSO, there's no need to ... What kind of questions should we ask on Meta Stack Exchange (MSE) and what kind of questions should we ask on Meta Stack Overflow (MSO)? Are we going to continue to ask questions regarding the ... Exhibit A: How does ([^.'\"\\#]\b|^) work? Note that I left a comment stating that http://regex101.com will provide a detailed explanation of each element of his regex. Do we really want to ...
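For what it’s worth, the regex in Exhibit A can be unpacked without regex101: it matches either the start of the string, or one character outside the set `. ' " \ #` that sits at the end of a word. A quick check in Python’s `re` (the asker’s language wasn’t stated, so this is just an illustration):

```python
import re

# Either a character outside the set . ' " \ # followed by a word boundary,
# or the (zero-width) start of the string.
pattern = re.compile(r"""([^.'"\\#]\b|^)""")

# Hits are: the empty match at position 0 (the ^ branch), then the final
# character of each word (the \b branch).
hits = pattern.findall("foo.bar")
print(hits)  # ['', 'o', 'r']
```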
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432978.12/warc/CC-MAIN-20140707234032-00028-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
593
3
https://www.construct.net/en/forum/construct-2/your-construct-creations-23/ld28-70142
code
This is my first Ludum Dare! Here is a link to the Ludum Dare page: http://www.ludumdare.com/compo/ludum-da ... &uid=25406 You play as "Doc", the survivor of a catastrophic lunar accident. You must use your trusty fire extinguisher to propel yourself through space and explore the wreckage of a fallen space station in order to regroup with your crew. Following the "You Only Get One" theme, I went with both control and narrative; you only get one input - click to blast the fire extinguisher - and you only get one life to find your crew. The jam went pretty well - though for future short-time-frame jams, I think I'll dedicate a little more time to them. I'd like to have gotten a lot more polish in and perhaps added other objectives. If you don't want to click through the Ludum Dare page, it's playable on my website here: http://seannoonan.co.uk/games/g_ld28/ Develop games in your browser. Powerful, performant & highly capable.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00499.warc.gz
CC-MAIN-2023-40
930
7
https://pypi.org/project/jurt/0.1.0/
code
Jeff's Unified Registration Tool jurt: Jeff's Unified Registration Tool Facilitate coregistration and spatial normalization of fMRI datasets. Copyright (c) 2018, Jeffrey M. Engelmann jurt is released under the revised (3-clause) BSD license. For details, see LICENSE.txt. Release history Release notifications | RSS feed Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00583.warc.gz
CC-MAIN-2024-18
431
8
http://slpr.net/programs/youth/robotics-coding/
code
Robotics & Coding Exploring allows you to take a firsthand look into those fields that interest you! If you want to learn more about Coding (social media, webpage design, video game design, etc.) or participate with Robotics, come check us out! A required Exploring Program membership form will be filled out at the first class; a parent's signature is required. The Exploring Program offers students an awareness of other opportunities and programs that they can get involved in. AGS Middle School
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00362-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
501
4
https://www.kdab.com/kdab-contributions-qt-5-6/
code
KDAB contributions to Qt 5.6 Qt 5.6 has just been released! Packed with incredible new features, 5.6 is also the first long term support release of Qt: it will be supported for the next 3 years, giving developers a solid foundation for their current and upcoming projects. Once more, KDAB is confirmed to be the largest independent contributor to Qt, as clearly demonstrated by the commit activity graph in Qt (KDAB is the green one): In this post I will summarize some of the contributions to Qt 5.6 that were developed by KDAB engineers. In no particular order: Support for Windows Embedded Compact 2013 KDAB is fulfilling its role as the maintainer of the Windows Embedded platform. We are glad to announce that Windows Embedded Compact 2013 is now fully supported by Qt 5.6. Many other fixes for Windows Embedded 7 also landed. High DPI fixes in Qt Quick A crucial feature of high DPI-enabled applications is the ability to react to DPI changes, which may happen for instance if an application’s window gets dragged between two screens with different DPIs. KDAB introduced the required infrastructure for Qt Quick applications to detect DPI changes and redraw their contents at the new DPI, which allows elements such as Text to always match the native display’s resolution. FreeType support for OS X One of the strongest points of Qt-based applications is their ability to match the native style of the platform they run on. However, in certain scenarios, we may prefer our application to have consistent UIs across different operating systems, even if that means not following the native look and feel. A prominent aspect of this is font rendering: Qt by default uses the platform’s font rasterizer, which means fonts are rendered differently depending on the OS. However, thanks to KDAB’s effort, it is possible to use FreeType as the font rasterizer on Linux, Windows, and starting from Qt 5.6, OS X too.
This makes it possible to build applications with consistent pixel-perfect text rendering across all these operating systems. When it comes to a big library like Qt it is often difficult to find bottlenecks and opportunities for optimization. There are simply too many possible code paths, and applications using Qt may be triggering any subset of them, often in non-trivial ways. These code paths need to be instrumented and benchmarked, and fixes need to be developed in case bottlenecks are found. Then, even when a particular code path doesn’t show up in a profiler, it is important that the codebase, as a whole, gets constantly audited by using state of the art tools (such as clazy, maintained by KDAB engineer Sérgio Martins), in order to remove all possible sources of slowness. Last, it is important that Qt’s footprint gets reduced by using the proper C++ constructs that help the compiler generate more efficient and compact code. KDAB is committed to keeping Qt at the highest quality. KDAB engineers have provided countless performance improvements to Qt 5.6. The complete list is too long to be reported in this narrow margin, so here’s just a few items: - The meta-object compiler’s (moc) parser has been optimized, and in certain cases, made twice as fast. - Reading integers out of a MySQL database via a prepared statement is now 33% faster. - Application startup on systems using fontconfig is now considerably faster due to the removal of an O(n²) algorithm. - Thanks to clazy, lots of accidental deep copies of containers were removed from Qt, as well as anti-patterns that allocate potentially huge temporary containers (such as qDeleteAll(hash.keys())). - Calls to reserve() were added in many places where it was possible to determine the number of elements that were going to be added to containers. - Qt containers and container-like classes gained a number of methods (reverse iterators, key iterators for associative containers, full comparison operators, etc.)
that allow writing code which is more efficient and results in less text in the executable. A special mention goes to KDAB’s senior engineer Marc Mutz, who has contributed over 330 commits to the 5.6 release. Thanks, Marc! Qt 3D 2.0 technical preview Qt 3D continues to be a major area of investment for KDAB as we see this as an important area for future growth of Qt. Qt 5.6 sees the introduction of many new features to Qt 3D including (but not limited to): - Instanced rendering – this allows drawing many (potentially tens of thousands of) copies of a mesh in a single OpenGL draw call. This saves a huge amount of CPU time and allows great performance for things like grass, crowds, rocks, particles, and so on. - Primitive restart – another way of being able to draw multiple distinct meshes using a single draw call (for better performance). - Support for many more 3D renderer states including clip planes, stencil operations and blending functionality. - New Input API – easily extensible to other input devices and allows mapping multiple physical devices to logical controllers. - New Logic aspect – allows code to be executed once per frame. Very useful for some use cases and good for prototyping new features in the future. - Lots of API improvements, bug fixes and examples. We are now performing API polishing ready for the release of Qt 3D alongside Qt 5.7 later this year. - As part of KDAB’s Qt on Android maintainership, more than 100 fixes and improvements were applied all across the board to Qt modules, Qt Creator and qbs. - KDAB is the maintainer for the CMake support in Qt, and for Qt 5.6 we’ve landed several bug fixes. - The QtPurchasing module received several iOS-related fixes. KDAB believes that it is critical for our business to contribute to the Qt framework and C++ thinking, to keep pushing these technologies forward to ensure they remain competitive.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473518.6/warc/CC-MAIN-20240221134259-20240221164259-00193.warc.gz
CC-MAIN-2024-10
5,837
30
https://www.element14.com/community/thread/44757/l/create-library-device-to-use-board-as-component
code
What's the best way to create a device from an existing Eagle schematic/board design? That is, you have an existing board, for which you have the Eagle project, and you now want to make a library component so you can use that board as a component on another board. Example, picked at random: ... and you want to use that board as a component on a new board. Obviously the existing Eagle board project contains a lot of data that could be used in the library component, but I am not seeing a straightforward path to making that happen. I've tried two approaches, neither of which worked well. (Using the example board.) 1. I tried copying salient layers from the Board editor to the Library package editor: Layers Pads, Dimension, tPlace, tOrigins, bOrigins, Drills, Holes - During the group Copy process (Board editor viewing the source board) I got error message: "Can't backannotate this operation. Please do this in the schematic!". This seems bogus, since Copy shouldn't change anything in the source data. Nonetheless, something did appear to get copied to the clipboard ... - During the Paste operation (into the destination library component Package), the initial movable visualization showed the entire board data seemingly about to be pasted (not just the layers I'd selected), but then when I went to actually paste, I got message "Skipped unsuitable objects", and the only things that actually pasted were items on the Dimension and tPlace layers. So, no pads or holes. 2. An alternate approach I tried was to use an image of the board as an underlay in the Package editor, over which to stick all new pads, holes, text and so on. - I made an image of the board's package, either using Eagle layout editor Export > Image, or just a screen shot of the layout - I used import-bmp.ulp, to get this background into the Package editor. This "works", though it converts the bitmap into a huge mess of rectangles and/or lines, on separate layers per color. 
It's good that they are on separate layers, but to use this background it must of course be visible, and this proliferation of shapes makes it difficult to pick and move the actual useful new pads, text and so on. - This seems very manual and crude, failing as it does to take advantage of the exact pad positioning, outline, holes, pad and pin names and so on that are already in the existing board project. Is there a better way to go about this?
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00177.warc.gz
CC-MAIN-2020-29
2,412
14
https://koderplace.com/blog/posts/26/few-devops-interview-questions
code
12 a) An S3 bucket is object storage, like Google Drive: you can store your files there. You create buckets, then upload your objects into them. S3 is highly available and durable (99.999999999% durability). 11 ans) In AWS we have 3 load balancers: 1. Classic Load Balancer: you simply create the LB and attach backend servers to it; when someone hits the LB, it redirects traffic to a backend server. 2. Application Load Balancer: supports path-based routing and other advanced features. 3. Network Load Balancer: provides more bandwidth than the above two and works over the TCP protocol.
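The path-based routing mentioned for the Application LB can be pictured as prefix rules mapping request paths to target groups. A toy sketch in plain Python (illustration only — the rule and group names are invented, and this is not real AWS API code):

```python
# Longest matching path prefix wins, mimicking ALB rule priority.
rules = {"/api/": "api-servers", "/static/": "asset-servers", "/": "web-servers"}

def route(path):
    best = max((p for p in rules if path.startswith(p)), key=len)
    return rules[best]

print(route("/api/users"))    # api-servers
print(route("/index.html"))   # web-servers
```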
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00146.warc.gz
CC-MAIN-2020-29
497
6
https://forum.freecodecamp.org/t/issue-tracker-project-quality-assurance-course/481811
code
So when I log your updated project (from the end of the PUT route), it produces issue_title: 'Faux Issue Title', issue_text: 'Functional Test - Required Fields Only', issue_title: 'Faux Issue Title 2', issue_text: 'Functional Test - Every field filled in', assigned_to: 'Chai and Mocha', issue_title: 'Issue to be Updated', issue_text: 'New Issue Text', The last one is the updated issue (note the time difference). The test does its thing, and then reports an error in the browser console of Error: expected 1634342623050 to be above 1634342623050. Those are times in milliseconds since the epoch and are the times for the first issue (no magic; the ‘050’ matches). So for whatever reason, the test is finding the first issue when it checks for updates and not the updated one. I would check that the GET issue stuff is working as it should be, double check that no issues are getting reordered or new ids assigned, and then maybe try putting the updated issue first with unshift() instead of last with push(). Sometimes you also need to start with a clean DB (I did to no avail). It may also be worthwhile to read the test source in fCC’s repo. These kinds of tests are difficult to debug since they use your project’s POST and GET routes to check the PUT and DELETE routes, so they all have to be working flawlessly to pass. You may also discover the problem by writing your functional tests for the last test.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00439.warc.gz
CC-MAIN-2022-40
1,421
14
https://www.andrew.cmu.edu/user/georgech/
code
I am an assistant professor at Carnegie Mellon University's Heinz College of Public Policy and Information Systems, and an affiliated faculty member of the Machine Learning Department. I primarily work on machine learning for healthcare and for information systems in developing countries. A recurring theme across my work is the use of nonparametric prediction methods in solving temporal or spatial forecasting problems. Since these methods inform interventions that can be costly and affect people’s well-being, ensuring that predictions are interpretable is essential. My book with Devavrat Shah is out: "Explaining the Success of Nearest Neighbor Methods in Prediction" (Foundations and Trends in Machine Learning, May 2018). Despite nearest neighbor methods appearing in text as early as the 11th century in Alhazen's "Book of Optics", it was not until fairly recently that arguably the most general, nonasymptotic theory for nearest neighbor classification was developed by Chaudhuri and Dasgupta (2014). This book goes over some of the latest nonasymptotic theoretical guarantees for nearest neighbor and related kernel regression and classification methods both in general metric spaces, and in contemporary applications where clustering structure appears (time series forecasting, recommendation systems, medical image segmentation). The book also covers some recent advances in approximate nearest neighbor search, explains why decision tree and related ensemble methods are nearest neighbor methods, and discusses the potential for far away neighbors to help in prediction. We also organized a related workshop at NIPS 2017 (slides are available for all the talks). Before joining Carnegie Mellon, I finished my Ph.D. in Electrical Engineering and Computer Science at MIT, advised by Polina Golland and Devavrat Shah. My thesis developed theory for forecasting viral news, recommending products to people, and finding human organs in medical images. 
I also worked on satellite image analysis to help bring electricity to rural India, and modeled brain activation patterns evoked by reading sentences. Between grad school and becoming faculty, I helped develop the recommendation engine at a predictive analytics startup Celect and then was a teaching postdoc in MIT's Digital Learning Lab, where I was the primary instructor and course developer for a new edX course on computational probability and inference. I enjoy teaching and pondering the future of education! I have previously taught at MIT, UC Berkeley, and in Jerusalem at a program MEET that brings together Israeli and Palestinian high school students. As a grad student, I served on the Task Force on the Future of MIT Education, and my time as a teaching postdoc was all about better understanding the digital learning space. Last updated July 17, 2018. Photo credit: Danica Chang.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591455.76/warc/CC-MAIN-20180720002543-20180720022543-00157.warc.gz
CC-MAIN-2018-30
2,859
5
https://github.com/dbsimeonov/hennessy
code
Recreating Hennessy Website - I built it using npm/sass, where I did not include a lot of mixins. The choice was just because it's easier for nesting and more readable (personal preference). - I have used jQuery, which I usually try to stay away from, but some of the effects like slideUp were not possible without it. - For most of the icons I've used Font Awesome 5, and the rest were copied from Small minimap of the website and my comments - Starting with the Homepage - I've included lots of functionality. The modals for Search are created with jQuery and the simplest and most awesome effect, slideUp. After that, I'm proud of the vertical navbar which creates a really clean and friendly look for the user. The dropdown effect on them is created with CSS animations because I wanted to mix things up and stay away from jQuery for some of those effects (it's not as smooth as it should be). The next one, which is most obvious for the user, is the slideshow/carousel/slider which I've done with Vanilla JS and which has an amazing animation for changing slides. As Homepage is the first one you will inspect, I'm happy with the effect of the arrow on the right bottom corner which always scrolls the page by 200px and is overlapped by the footer. - The pages which get their data from - History stiff slider - The slider was a challenge as well, but after my second attempt I got it to work smoothly. The UI on desktop is really friendly and easy to navigate. Mobile device styling is way different from the desktop but still works and offers a different experience for the users. - Proud of the project. I know that there is much more to do and lots of the pages are missing, but the basic layout of the website is done from scratch. It took me more than a month but it was worth every minute working on it. If you have any questions, comments, tips etc. please contact me.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00133.warc.gz
CC-MAIN-2019-30
1,859
26
https://forum.arduino.cc/t/vl6180-does-not-seem-to-work-with-the-mkr1000/415091
code
I'm having some trouble making the VL6180 sensor work with the MKR1000 (SparkFun, SEN-12785). I'm using the latest library and example provided: https://github.com/sparkfun/SparkFun_ToF_Range_Finder-VL6180_Arduino_Library The code compiles fine, but after it's uploaded into the board, it makes it disappear from the list of USB devices available. I'm using the Arduino IDE 1.6.11, and I've connected everything correctly: MKR1000-VL6180 5V-VCC GND-GND SCL-SCL SDA-SDA
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00099.warc.gz
CC-MAIN-2021-21
468
5
http://www.astrology-insight.com/horoscopes/cancerhoro.htm
code
Protective and sympathetic June 22 - July 22 These Horoscopes are for Friday 08/23/2019 (Hit refresh or reload if the date is wrong.) You may have difficulty trying to get your mate to understand your position. You could have a change of heart if an old flame waltzes back into your life. You can make wonderful contributions to any organization that you join.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317817.76/warc/CC-MAIN-20190823020039-20190823042039-00238.warc.gz
CC-MAIN-2019-35
360
8
http://reservoirdubs.com/t/allianz-football-league-nfl1-2018/2675?page=34
code
Talking of KD and last decade… Sean C was half-jokingly asked if the ’05 Tyrone team could live with this Dublin side. Can’t remember his exact answer other than they’d give them a rattle. But it got me thinking what a game that would’ve been. A forward line of Peter C, S O’Neill, Mulligan, McGuigan, Dooher and Sean C against these Dublin backs…and vice versa. Both mobile and talented units. And hard. Impossible to answer of course. I’ve said before that Dublin are the best side I’ve seen (so did SC last year)…so would give a begrudging nod to Dublin with the bench they have.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811655.65/warc/CC-MAIN-20180218042652-20180218062652-00425.warc.gz
CC-MAIN-2018-09
597
8
https://addons.mozilla.org/pt-BR/firefox/extensions/other/?sort=created
code
Helps take quizzes in the IU canvas system by autocompleting known questions Make Kimai navigable Search the web without tracking your search history or any personally identifiable information This extension will check all your meebook stuff for you! Generate random integer, decimal numbers and binary sequences. Save the numbers in files for any later use. Scroll to next head comment on Hacker News with the click of a button. Load the DokuWiki Toolbox plugin on any DokuWiki installation on the web Simple flag checklist An advanced client for web mapping services that use the WMS protocol Hides completed checklist items on Trello cards This firefox extension enables screen capturing for kore.com Hides the student picture in the Mashov system (web.mashov.info) automatically. It limits the scrolling of facebook page such that only the very recent posts are seen, which allows users to spend less time on facebook. Create Feedback on FeedbackPanda directly from Online Education sites such as VIPKid.com This firefox extension enables screen capturing support in Firefox for different pages. Add spell check functionality to the title field in PCD. After installing this add-on, misspelled words within the PCD title field will be highlighted with a red squiggly line underneath. Allows reading of plain hypertext (htx) documents. Plain hypertext (htx) document format is a simple human-readable hypertext. It is under development. The current spec (in Russian) is located at http://kp4.ru/htx/htx-0.1.htx. Quickly open the compose window of Gmail. Keyboard Shortcut: Alt + C. This extension allows you to share your screen
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00507-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,634
20
https://georgedell.com/pick-comps-make-adjustments/?replytocom=83
code
We are taught that comps should be similar, then we adjust. Easy. There are three types of adjustment results. Some are exact, others are approximate, and others are biased but can help us get near the right adjustment number. It is important to understand the basic differences in these three different outcomes. We will consider how these are different, how each occurs, and how to handle each in our appraisals. Future blogs will explore each of these in turn. Each of the three methods recognizes that there may be some error in measurement. If the subject or a comp is mis-measured or mis-reported, the analytics will not fix that. That has to do with data collection and validation. Also we disrespect any notion that a random sample is taken, so there is no approximation error from the random sample to the “population” being studied. For our analysis here, the complete data set is the market segment: all the sales which directly compete with the subject as of the effective date of value. To simplify for now, we will assume only a measure variable, like Sq. Ft. improved area. We will assume the use of only simple regression: One predictor, one predicted variable. An exact, deterministic result comes when the input data is measured, say to the date each sale closed. If we have 15 sales, we know the date of each sale, and we know the sale price. Run a simple linear regression, and you get an exact number. We can call this the price index. If the trend is linear, and the data is correct, there is no variation in the answer. It is deterministic. An estimated result comes when there is random variation in the input variables (predictors), not related (correlated) with any other predictor. An example is capitalization rates of similar properties, of the same or similar age. Variation may come from a number of sources, such as management quality, where the quality of the management is not connected with any other variable. 
A biased result comes when there is correlation with another input variable. An example is Sq. Ft. living area and number of rooms in a house. This is called collinearity. Basically, each measures the same thing in a different way. If we do a simple regression of price on living area, the living area may also pick up the value of extra rooms. From experience, or other analysis, we know that the relationship is positive. We want to use the regression coefficient as a traditional adjustment. But we know that the coefficient is a bit too high per Sq. Ft., because it also picks up the room count. Whatever regression coefficient we get is too high. But the question is: Is this information useful? The answer is yes. Of course. We have an indication. We have a number, and we know it is a bit too high. So we are approaching a reliable adjustment. Now we can “adjust the adjustment.” We have a number. We know it is biased. And we know what direction it is biased – too high. Can we approach the “correct” adjustment even closer? Well, yes. Can we use simple judgment now to adjust for the bias? Well, yes. Is that subjective? Well, yes. But is it far less subjective than simply applying a subjective adjustment amount in the first place? Of course! But there are technical ways to bring this in even closer. Another blog for another time. We will consider the three types of adjustment results in future blogs, as we do in the Stats, Graphs, and Data Science 1 class, where we do real exercises with real data, and get real results.
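The “biased but useful” coefficient is easy to demonstrate with synthetic data (all numbers here are invented for illustration): regress price on Sq. Ft. alone while room count rides along with it, and the simple-regression slope comes out too high; add rooms as a second predictor and the slope settles near its true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
sqft = rng.uniform(1000, 3000, n)
rooms = sqft / 400 + rng.normal(0, 0.5, n)        # rooms correlate with sqft
price = 100 * sqft + 5000 * rooms + rng.normal(0, 5000, n)

# Simple regression: price on sqft alone. The slope absorbs the rooms effect,
# so it lands near 100 + 5000/400 = 112.5, not the true 100 $/sqft.
simple_coef = np.polyfit(sqft, price, 1)[0]

# Multiple regression: price on sqft and rooms together recovers ~100 $/sqft.
X = np.column_stack([sqft, rooms, np.ones(n)])
multi_coef = np.linalg.lstsq(X, price, rcond=None)[0][0]

print(simple_coef)   # noticeably above 100: biased upward by collinearity
print(multi_coef)    # close to the true 100
```

This mirrors the article’s point exactly: the biased simple-regression number is still informative, because we know which direction it errs.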
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.8/warc/CC-MAIN-20180220224819-20180221004819-00252.warc.gz
CC-MAIN-2018-09
3,495
12
https://news.rodh-it.com/2024/03/13/artificial-intelligence-courses-online-free-mastering-ai-without-breaking-the-bank/
code
Artificial Intelligence Courses Online Free: Mastering AI Without Breaking the Bank: Are you intrigued by artificial intelligence but unsure where to start? Do you want to dive into the world of AI without spending a fortune on courses? Look no further! In this comprehensive guide, we’ll explore the best free online courses that will equip you with the knowledge and skills needed to thrive in the field of artificial intelligence.

Table of Contents

| Sr | Headings |
| --- | --- |
| 1 | Understanding Artificial Intelligence |
| 2 | Introduction to Machine Learning |
| 3 | Exploring Deep Learning Techniques |
| 4 | Python Programming for AI |
| 5 | Applications of AI in Real Life |
| 6 | Ethics and AI: Understanding the Implications |
| 7 | Building AI Projects: From Concept to Deployment |
| 8 | Resources for Continued Learning |
| 9 | FAQs |
| 10 | Conclusion |

Artificial intelligence, or AI, is a branch of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. These tasks include speech recognition, decision-making, visual perception, and language translation. Machine learning is a subset of artificial intelligence that enables computers to learn and improve from experience without being explicitly programmed. It’s the driving force behind many AI applications and technologies, including self-driving cars, recommendation systems, and natural language processing. Deep learning is a specialized field of machine learning inspired by the structure and function of the human brain. It involves the use of artificial neural networks to analyze and interpret complex data, leading to breakthroughs in areas such as image recognition, medical diagnosis, and autonomous vehicles. Python is the preferred programming language for many AI developers due to its simplicity, versatility, and vast ecosystem of libraries and tools. 
Learning Python is essential for anyone interested in pursuing a career in artificial intelligence, as it’s used extensively in AI research and development. Artificial intelligence has countless applications across various industries, including healthcare, finance, manufacturing, and entertainment. From personalized medicine to fraud detection and autonomous robots, AI is revolutionizing the way we live and work. As AI becomes more prevalent in society, it’s crucial to consider the ethical implications of its use. Issues such as bias in algorithms, privacy concerns, and the impact on jobs and society need to be addressed to ensure that AI benefits everyone equitably. Putting your AI knowledge into practice involves more than just theoretical understanding. It requires hands-on experience in designing, implementing, and deploying AI projects. Learning how to work with real-world data and navigate the development lifecycle is essential for success in the field. The field of artificial intelligence is constantly evolving, with new techniques, algorithms, and technologies emerging regularly. To stay ahead in this fast-paced industry, it’s essential to engage in continuous learning through online courses, books, research papers, and community forums. Some reputable platforms offering free AI courses online include Coursera, edX, Udacity, and Stanford Online. While a background in computer science can be beneficial, many introductory AI courses assume no prior knowledge and are designed to be accessible to beginners. While free online AI courses may not offer the same level of depth or personalized support as paid ones, they can still provide valuable knowledge and skills to get started in the field. The duration of free online AI courses varies depending on the platform and the specific course. Some courses can be completed in a few weeks, while others may take several months to finish. 
Many platforms offering free AI courses offer certificates of completion, which can be a valuable addition to your resume or portfolio. Embarking on a journey to learn artificial intelligence doesn’t have to break the bank. With the abundance of free online courses available, anyone can acquire the knowledge and skills needed to explore this fascinating field. Whether you’re interested in machine learning, deep learning, or ethical considerations in AI, there’s a course out there waiting for you. So why wait? Start your AI education today and unlock a world of possibilities! This article provides a comprehensive guide to free online artificial intelligence courses, covering various topics such as understanding AI, machine learning, deep learning, programming, real-life applications, ethics, project building, and continued learning resources. Additionally, it includes a table of contents, FAQs, and a conclusion to enhance readability and provide valuable information to the readers.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296820065.92/warc/CC-MAIN-20240425000826-20240425030826-00527.warc.gz
CC-MAIN-2024-18
4,896
29
http://renprovey.com/hstore/2012/10/27/hstore-you-just-might-enjoy-shopping-there
code
hstore You Just Might Enjoy Shopping There At work, I've been lucky enough to be developing an entirely event driven application (more on that in the future). I knew it would be an interesting challenge, but I didn't realize how much I'd enjoy it. Being a green field app, we're using a number of technologies and services that have me excited. While it took me a bit of deliberation to decide on, the piece that has really surprised me and expanded my architectural thinking is PostgreSQL's hstore. In hindsight, the fact that hstore is allowing us the flexibility to prototype quickly is less surprising than the fundamental shift in how I've started thinking about storing and retrieving data. If you're interested in checking out hstore, I recommend the following stops: - hstore docs - activerecord-postgres-hstore gem - Railscasts (as usual, excellent) - Heroku hstore example app - You got NoSQL in my Postgres! Using Hstore in Rails The one bit that took me some time to find was about getting hstore running in my Rails test environment. Rails' default schema load doesn't enable the Postgres extension (as happens in rake db:test:prepare). To have hstore enabled you'll need to uncomment or add the following line in your application config: config.active_record.schema_format = :sql This will change your schema dumps from Ruby to SQL. If you're unsure about the implications of this change, you can learn more in this Rails Guide. October 27 2012
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00095.warc.gz
CC-MAIN-2018-17
1,432
13
https://community.filemaker.com/thread/150842
code
why do you use a portal instead of a layout with the table you want to address (I am trying to understand the mechanics)? The portal is only there as a data entry tool for the user to construct their search terms. When they click Search the script goes to the source table of the portal and loops through the records. In case there is a misunderstanding - I'm not looping through the portal rows, I'm going to a layout with the table I want to address and constructing an SQL query. I then perform that query, get a list of IDs back and use that to compare against an existing list of IDs by performing a logical AND and using the resulting ID list as a global key to display the results in another portal. It is not very clear what your PSoS script does exactly; you mention that it does not do portal looping, but in your original post it sounded like it did. Can you expand a bit on what exactly is in the PSoS script and what parameters you pass to it? I didn't mention portal looping - I said that I go to a utility layout to do the looping. The main script is triggered by the user clicking a button. One of the subscripts which, when called via PSOS, will cause the portal records to be deleted is shown here - If [ #Assign ( Get ( ScriptParameter ) ) ] Set Variable [ $searchResults ; Value: ExecuteSQL ( $sqlQuery ; "" ; "¶" ) ] Exit Script [ Result: $searchResults ] (Sidenote: The #Assign statement is from www.filemakerstandards.org. It takes a parameter passed as a name/value pair and assigns the value to a local variable. The variable name is taken from the script title - in this case 'Execute SQL ( sqlQuery )' yields the local variable $sqlQuery.) An example SQL statement that gets passed is - SELECT Customers.id FROM Customers WHERE Customers.isDeleted = 0 AND Customers.isActive = 1 It is assigned to a variable $query and is passed as a parameter in the format '# ( "sqlQuery" ; $query )' which is the expected format of the #Assign statement in the PSOS script. 
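The "logical AND" of the SQL result against the existing ID list can be sketched outside FileMaker. This Python sketch uses newline-joined strings to stand in for FileMaker's ¶-delimited lists (the function name and data are illustrative):

```python
def and_id_lists(existing: str, found: str) -> str:
    """Keep only the IDs present in both return-delimited lists."""
    found_set = set(found.split("\n"))
    return "\n".join(i for i in existing.split("\n") if i in found_set)

existing_ids = "101\n102\n103\n104"   # IDs already displayed
sql_ids = "102\n104\n105"             # IDs returned by ExecuteSQL
print(and_id_lists(existing_ids, sql_ids))  # only 102 and 104 survive
```

The surviving list can then play the role of the global key that drives the results portal.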
As you can see, there is very little code here and nothing at all to do with deleting records. I'm convinced it's a bug as I can use PSOS on any subscript and the same thing happens. Thanks for your help. Can you confirm that the records that disappear from the portal are actually deleted? You cannot find them anymore in the child table? I doubt that this is a bug; it is more likely that something else in your script (not the PSoS part) is either killing the relationship or deleting the records. But I could be wrong; if it is a bug then we should be able to replicate it in a simple file that does nothing but the PSoS part. Have you tried to confirm it that way? Yes, they are definitely deleted. They don't exist anywhere in the child table - the record count is 0 (zero). When I use Script Debugger the records disappear from the portal immediately after ANY subscript that is called by PSOS, which makes me think it's a bug. If it was bad coding it would occur in the same place every time, but it doesn't. If I set one subscript to 'Perform Script' and a later subscript to 'PSOS' the error occurs later in the main script. If I switch so that the first subscript is 'PSOS' then the error occurs after that call. I haven't tried to replicate it yet as I'm not sure exactly how! I'll take a look and see what I can do. Keep in mind that PSoS will first run the OnFirstWindowOpen script if there is one configured. So that sequence could be responsible, especially if it contains any script steps that are not server-compatible. Oh man, are you kidding? I had no idea that PSoS called OnFirstWindowOpen as there's no interface! I clear the query table on launch so without even checking I know that's going to be the issue. You're my hero!!!!!!!! If you're ever in the UK there'll be a nice cold beer waiting for you Just to confirm that a quick If statement in the OnFirstWindowOpen script has fixed the issue. Thanks so much! You're very welcome. Glad it got solved! 
Wim, I've never heard of PSOS running the on first window open script before and although I don't use it yet I plan to. I've read about it but don't remember seeing this. Is it documented? Are you saying it will run the opening script of a file that's already open or on files opened by sub scripts? Yes it is documented, although perhaps a little obliquely. Note where it states that a PSoS script behaves as a server-side schedule script, and those do run the OnFirstWindowOpen script. Are you saying it will run the opening script of a file that's already open or on files opened by sub scripts? And that is where the most common logical flaw is: by calling PSoS you are in effect creating a completely new user session on the server. That user session basically logs in as you but starts completely anew; it does not know what layout you are on or what found set you are on, it just logs into the database, runs the OnFirstWindowOpen and then your script. It does not know what files you have open in your local session, it does not know what layout you are on or what records you have active. You have to re-establish those in the PSoS session by passing variables to your PSoS script. In addition to all of the above: the fact that you spawn a completely new user session but one that runs on the server in effect makes FMS an "application server" instead of a pure "database server". So you have to be careful that your server can actually handle that extra load. I've seen many posts by developers that say "PSoS is x times faster than running it locally". And it is true, but it may not remain true if you have 10/20/50/100 users doing that all at the same time and your server does not have the processing power to cope with all of that. I am not discouraging the use of PSoS, I'm just saying: consider all aspects; especially the current performance baseline and what adding the load will do to the server and every user's experience. The PSoS script step starts a new user session on the FMS. 
It's as if a new computer had connected to the DB, which runs the OnFirstWindowOpen script. Then this new session runs whichever script is called in the PSoS step. You must be sure to establish the context and found set within the script and DO NOT rely on anything in the current user's session. This includes any variables and changed global fields. Also, all script steps must be server compatible.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00310.warc.gz
CC-MAIN-2017-34
6,356
38
http://www.linuxquestions.org/questions/ubuntu-63/new-to-ubuntu-studio-849989-print/
code
New to Ubuntu-Studio I am a long time Linux user but new to Ubuntu. I was attracted to U-Studio and put it on my new box. First I installed Ubuntu and tried to use the Aptitude route to Studio - no good: after install I could not get authenticated. I tried this twice - no authentication, no log in. I then got the Studio DVD, and this works. The GRUB boot, which I did not notice in the original Ubuntu, gives me several non-working kernels, but the generic kernel works and is sufficient. The fact that GRUB fails to come up almost half the time is not good, but that may be a hint at why the Aptitude business failed. I also have a problem with the screensaver - once the screensaver kicks in I cannot get it to stop and have to reboot; this is not Linux as I know it. My instinct is that I need to customize the kernel but would be interested to hear if anyone else has these problems. I am running an Intel i5 on a DH55HC MB. See this for upgrading to Ubuntu Studio from a default Ubuntu: Ubuntu Studio uses a real-time kernel. I don't know if this is the reason some kernels don't boot, but you can read about the various kernels in Ubuntu Studio here: And see this for getting started with Ubuntu Studio:
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00428-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,245
6
https://www.informationisbeautifulawards.com/showcase/4746-the-guys-grabbed-a-drink-and-decided-to-make-some-noise
code
“The guys grabbed a drink, and decided to make some noise” by Novaya Gazeta Europe The project provides an extensive overview of the Ukrainian conflict before it escalated into full-scale war after the Russian invasion of Ukraine on February 24, 2022 (date of publication: 24 August 2021). The analysis is based on data on the number of ceasefire violations provided by the OSCE Special Monitoring Mission. We gathered more than 2000 daily reports in PDF format from the OSCE website using Python. These PDFs have different structures according to different publishing periods, so we created a programming tool to parse them and convert them into large CSV files. It also required some manual work to verify the recognition quality. By matching the timeline of ceasefire violations with social and political events and media propaganda, it became clear that at the time the conflict had transformed from real military operations into a so-called hybrid war or infowar. And some groups had an interest in maintaining that state of affairs.
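The project's actual parser isn't shown, and the real OSCE PDF layouts vary by publishing period; as a rough sketch of the text-to-CSV step, assuming extracted report lines of the (hypothetical) form used in the sample below:

```python
import csv
import io
import re

# Hypothetical line format; real OSCE reports differ by publishing period,
# which is why the project needed per-period parsing logic.
RECORD = re.compile(r"(\d{1,2} \w+ \d{4}): (\d+) ceasefire violations? in (\w+)")

def reports_to_csv(lines):
    """Convert recognizable report lines into CSV rows; skip the rest."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["date", "region", "violations"])
    for line in lines:
        m = RECORD.search(line)
        if m:  # unrecognized lines are dropped -- hence the manual QA step
            date, count, region = m.groups()
            writer.writerow([date, region, count])
    return buf.getvalue()

sample = [
    "15 April 2021: 54 ceasefire violations in Donetsk",
    "15 April 2021: 7 ceasefire violations in Luhansk",
    "page footer -- not a record",
]
print(reports_to_csv(sample))
```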
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00693.warc.gz
CC-MAIN-2024-18
1,045
4
https://www.shaalaa.com/question-bank-solutions/observe-connections-cells-shown-following-images-solar-photovoltaic-cell_51904
code
Observe the connections of cells shown in the following images. i. Which connection will give maximum potential difference? ii. Give one advantage and one disadvantage of this energy source. i. The connection of solar cells in ‘A’ is a series connection, and it will give the maximum potential difference. ii. Advantage: Solar energy is an eco-friendly source of energy. It does not cause any pollution. Disadvantage: As it can be produced only in the presence of sunlight, it needs to be stored in batteries for use.
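The arithmetic behind answer (i) is simply that identical cells in series add their EMFs, while cells in parallel keep the potential difference of a single cell. A small illustration (0.5 V per cell is an assumed, typical value):

```python
CELL_EMF = 0.5  # volts per solar cell (illustrative value)

def series_pd(n_cells, emf=CELL_EMF):
    """Potential difference across n identical cells in series: EMFs add."""
    return n_cells * emf

def parallel_pd(n_cells, emf=CELL_EMF):
    """In parallel, the potential difference stays that of one cell."""
    return emf

print(series_pd(4))    # 2.0 V -- the maximum potential difference
print(parallel_pd(4))  # 0.5 V
```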
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00368.warc.gz
CC-MAIN-2021-17
514
6
https://docs.vmware.com/en/VMware-SD-WAN/services/VMware-SD-WAN-Google-Cloud-Platform-Virtual-Edge-Deployment-Guide/GUID-97089BD3-BB0B-4E92-BB28-11DAE28F0AC1.html
code
You can choose to create an Automatic mode or Custom mode Virtual Private Cloud (VPC) network. Automatic mode networks create one subnet in each Google Cloud region automatically when you create the network. For Custom mode VPC networks, you have to create a network and then create the subnets that you want within a region. You can create subnets when you create the network or you can add subnets later, but you cannot create instances in a region that has no subnet defined. Ensure you have a Google account and access/login information to the Google Cloud Platform (GCP) Console.
- Log on to the GCP Console.
- Click VPC Networks. The VPC Networks page appears.
- Click Create VPC network. The Create a VPC network page appears.
- In the Name textbox, enter a unique name for the VPC network.
- Under Subnets, choose Custom or Automatic as the Subnet creation mode. If you choose Custom, then in the New subnet area, specify the following configuration parameters for a subnet:
  - In the Name textbox, enter a unique name for the subnet.
  - From the Region drop-down menu, select a region for the subnet.
  - In the IP address range textbox, enter an IP address range.
  - To define a secondary IP range for the subnet, click Create secondary IP range.
  - Private Google access: Choose whether to enable Private Google Access for the subnet when you create it or later by editing it.
  - Flow logs: Choose whether to enable VPC flow logs for the subnet when you create it or later by editing it.
  - Click Done.
- To add more subnets, click Add subnet and repeat the steps in Step 5. You can also add more subnets to the network after you have created the network.
- Choose the Dynamic routing mode for the VPC network.
- Click Create. The VPC network and subnet are created.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00663.warc.gz
CC-MAIN-2022-33
1,765
20
http://stackoverflow.com/questions/10954320/iis-caching-and-web-services?answertab=oldest
code
I am hosting my WCF services with HTTP binding under IIS. Lately I noticed that some sort of caching is going on. I have to refresh my web service until I get the real data, rather than data from a few minutes ago. Is there a way to disable this sort of caching, and how do I do so?
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125524.70/warc/CC-MAIN-20160428161525-00123-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
295
4
https://bugreports.qt.io/browse/QTBUG-99682
code
Priority: Not Evaluated Resolution: Won't Do Affects Version/s: 5.15.8 Fix Version/s: None In an attempt to modularize building of QtWebEngine 5.15.x, I looked into whether the GN bundled in qtwebengine-chromium could be built separately (I understand that doing so is likely unsupported). I notice various things appear to be currently broken in the gn_unittests target; I am reporting these in case others are interested and/or had expected it to work. What is described in this report does not impact ordinary builds of QtWebEngine 5.15.x, which only build the gn target. - The file gn/src/gn/function_filter_unittest.cc (from backport commit 84164e4) is not in the path specified in executables['gn_unittests']['sources'] of gn/build/gen.py. Presumably the file should be moved to gn/tools/gn/, where the rest of the unit test files reside. - gn/tools/gn/ninja_target_command_util_unittest.cc (from backport commit 677e14d0) uses ESCAPE_SPACE, and so does not compile without backporting GN upstream commit 5624679 (as well as adjusting the signature of EscapeStringToString_Space() so that it accepts a const base::StringPiece& instead of a const std::string_view&). - Multiple files seem to need include header path adjustment (possibly by prepending "tools/"). The affected source files and include headers are: gn/tools/gn/function_filter_unittest.cc: gn/functions.h, gn/test_with_scope.h - gn/tools/gn/qmake_link_writer_unittest.cc does not compile because a pointer to NinjaBinaryTargetWriter writer is being passed to the QMakeLinkWriter constructor, which expects const NinjaCBinaryTargetWriter * i.e. a pointer to a child class of NinjaBinaryTargetWriter (see gn/tools/gn/qmake_link_writer.h). 
Possible fixes could be to have the unit test file instead declare writer as NinjaCBinaryTargetWriter (and include ninja_c_binary_target_writer.h instead of ninja_binary_target_writer.h), or have the QMakeLinkWriter constructor instead expect const NinjaBinaryTargetWriter * (as currently done on qtwebengine-chromium branch 69-based), but I do not know which approach is correct/preferred.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710941.43/warc/CC-MAIN-20221203212026-20221204002026-00656.warc.gz
CC-MAIN-2022-49
2,097
10
https://community.particle.io/t/shared-variable-between-loop-and-software-timer/37658
code
I am sharing a variable between a software timer and a function called from the loop. I am running into issues where the software timer is modifying the variable in the middle of the loop function and it is causing issues in that function. Any recommendations on how to avoid this problem? What would the correct behaviour of your firmware be? Do you want it to immediately start using the new value of the variable? Another option would be to finish the function and then update the variable. One possible solution is to use the software timer to set a flag instead of changing the variable. The firmware can then check the flag at the appropriate time, and update the variable when you decide it should be updated. Checking the flag could be in the main loop or in the function; it depends on the behaviour that you want to achieve. This is really all about thinking through how your program works. Planning is key in any piece of software; what you will find (particularly in embedded work, I find) is that if you are not careful you can end up with a very high number of global variables. Whenever you create a variable, think about what it is for, what can access it and why, as well as the consequences of that access. Wouldn’t it be a similar scenario since the loop and the timer share the flag? As @Viscacha says, it is important to think through how your firmware works. If you use a flag per my earlier suggestion, the flag would only be set by the software timer. Then the flag would be reset at the appropriate time. It is not about letting any part of your firmware set or reset a flag, but carefully controlling how it is used. I think what Viscacha might be suggesting is to look carefully at the design of your firmware. Think carefully about what information is needed and how it is used; that will affect how you use variables. Also look at the order of steps in your firmware; getting the steps in just the right order might simplify things. 
There might be a solution more elegant than using a flag. In this situation, I think a flag is not going to work. I have a software timer that reads the ADC, performs some math, and stores it in a variable. The main loop needs to publish the latest value of that variable upon a certain event, independent of the status of the software timer. Would using SINGLE_THREADED_BLOCK be helpful in this scenario where they need to share that variable? @felixgalindo, since Software Timers run in a separate thread, then yes, using SINGLE_THREADED_BLOCK in loop as a way to prevent contention should work. The trick is to keep the blocking code short and fast. Also make sure to declare your shared variable volatile so the compiler doesn’t do any optimization weirdness with it. Another alternative is to copy your timer-set variable into another, local, variable at the start of the loop function, so you don’t need to reference it again inside the function. Doing that single assignment in a single-threaded block would be safe and fast. In this way, the value stays stable for purposes of your function and is only updated when you expect it to be. @apt, this is a good suggestion but will depend on the “timeliness” of the change in variable value. As pointed out earlier, a good design looks at the relationship between the Timer and loop() to establish how connected they need to be. This might mean a “loose” coupling via a queue or a tight coupling via a mutex. Using a simple flag that the Timer reads but loop() sets could be used as a simple mutex, for example. The mutex would be set by loop() to indicate that data was being used, and the Timer would skip modifying that data if the mutex was in “busy” mode. There are plenty of ways to cook this dish! I’ve considered using a flag in other scenarios as sort of a mutex as well, but would reading/writing a flag be considered an atomic operation? I was worried that I could write to the flag while in the other thread it was being read. 
Do you see this as a potential problem, or would using the SINGLE_THREADED_BLOCK be the safest way to go? @peekay123 @felixgalindo, the chances of getting a preemptive thread interruption in the middle of an instruction (e.g. flag = true) are minute. If you are concerned, specify it as a SINGLE_THREADED_BLOCK() or an ATOMIC_BLOCK(). There was a thread on creating FreeRTOS mutexes where a member successfully created a mutex for the SPI resource: @peekay123 Would the SINGLE_THREADED_BLOCK only be required in the loop or also in the timer callback? In loop() only since the Timer thread is higher priority and will preempt loop() unless blocked.
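Particle's SINGLE_THREADED_BLOCK() and volatile are device-firmware constructs, but the "copy the shared variable into a local at the start of loop()" advice is a general pattern. A sketch of the same idea in Python, with a lock standing in for the short single-threaded block (the ADC reading is simulated):

```python
import threading

shared_reading = 0.0            # written by the timer, read by the loop
lock = threading.Lock()

def timer_callback(adc_value):
    """Timer thread: update the shared variable inside a short critical section."""
    global shared_reading
    with lock:
        shared_reading = adc_value

def loop():
    """Main thread: snapshot the value once, then work with the stable copy."""
    with lock:                  # keep the blocking section short and fast
        snapshot = shared_reading
    return snapshot * 2         # the local copy cannot change mid-computation

timer_callback(21.0)
print(loop())  # 42.0
```

Because only the single assignment happens under the lock, the timer is blocked for as little time as possible, which mirrors the "keep the blocking code short and fast" advice above.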
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00645.warc.gz
CC-MAIN-2020-40
4,617
22
https://www.construct.net/en/forum/construct-3/general-discussion-7/arrays-working-differently-153550
code
Okay, this is a bit weird I think. I'm using an array to track quests in my game. Every time the quest gets updated, I use a function to append the text strings and then show that in a journal. All seemed to be working well, but I went to debug one of the function parameters to make sure I was happy with it, when I discovered that things act differently when running in debug mode. I thought it would be best to video this, with the game first being run in normal mode, then in debug mode, to demonstrate. You can see how the journal fills out in the first instance correctly, but when running the game in debug mode, it's clearing the cells. Not sure why this is happening, but I thought it was worth discussing. Tried it on multiple occasions, thinking that maybe I wasn't giving the array time to load in, but it happens like this every time no matter what. What could be happening in C3 to make the debug mode act differently to the normal mode? Guess I should point out that this isn't really a problem for me atm, as it only happens when I try to fill the wrong cell in the array. Still, I found it interesting.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00053.warc.gz
CC-MAIN-2023-06
1,118
5
https://github.com/jocastaneda
code
- Roseville, CA See how many hooks and filters the currently active theme has and what file it is located in. Modified WordPress-i18n tools for translating and creating .pot file Display a link list to your custom post type in a widgetized area. Choose how many and how to order them
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204969.39/warc/CC-MAIN-20190326095131-20190326121131-00392.warc.gz
CC-MAIN-2019-13
459
6
https://math.meta.stackexchange.com/questions/29715/is-generalization-a-good-tag?noredirect=1
code
Recently I've come across the tag "generalization", see here. It's relatively new and there is only one question (Edit: now removed) under it. However, I don't think it's that useful, taking into consideration there isn't any mathematical content associated with it. On the other hand, it's quite vague whether or not a question should be tagged "generalization", maybe when we spot the phrase "in general" or "more generally"? I would like to know the community's opinion on this. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00603.warc.gz
CC-MAIN-2023-50
500
1
https://forum.cogsci.nl/discussion/comment/5682/
code
[solved] Inline Python Script Capabilities I feel that OpenSesame primitives don't expose the functionality I am looking for in the experiment I am developing. However, being able to drop into Python should resolve that, providing the following are possible: Time logging at various points within an item: I want to show an item with multiple states, which the user advances through by mouse clicks, where I want to record the time spent on each state. (AFAIK, I can't do this with an OpenSesame loop because the number of states available, each time this item is shown, will be variable.) Is it possible to programmatically log a time on-demand? When I have progressed through all the states in said item, I want to then proceed to the next item (which will just be a regular OpenSesame item); again by mouse click. What command is used to advance to the next item? Will I be able to open a SQLite database from the OpenSesame file pool? I feel like experiment.get_file should do the job -- and import sqlite3 appears to work -- but I'm interested to know if anyone has tried this. I'm sure I'll have more questions (sorry!)...
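The per-state timing in the first question doesn't need anything OpenSesame-specific; in an inline script it can be done with a plain loop that stamps the clock around each state. A generic sketch (the advance() callback stands in for waiting on a mouse click, and the state list can be any length):

```python
import time

def run_states(states, advance):
    """Show a variable number of states; log the time spent in each one."""
    log = []
    for state in states:
        start = time.monotonic()
        advance()                              # block until the user clicks
        log.append((state, time.monotonic() - start))
    return log

# Simulated session: each "click" arrives after a short delay.
log = run_states(["intro", "map", "question"], lambda: time.sleep(0.01))
for state, elapsed in log:
    print(f"{state}: {elapsed:.3f} s")
```

Each logged duration could then be written to the experiment's log file or the SQLite database mentioned in the last question.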
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00027.warc.gz
CC-MAIN-2021-10
1,125
8
https://free3d.com/zh/3d-model/female-fantasy-character-8629.html
code
High detailed and Rigged Female Character
- IK FK Rig
- Albedo, Normal, Metallic and Roughness maps
- 4096x4096 png format and lower
- PBR Textures
- UV unwrapped
- High details
Materials are created for Blender Cycles. Included file formats: all versions are directly exported from Blender. Only the Blend file includes a rig. For texture files, see Accompanying Product Files.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704798089.76/warc/CC-MAIN-20210126042704-20210126072704-00248.warc.gz
CC-MAIN-2021-04
362
12
http://braindonor.net/
code
Proudly providing updates on nearly useful information for over a decade. The Braindonor Network incubates technology ideas, recipes, mathematical formulas, and failed plans to take over the world. Sit back and enjoy watching us try to take over the world one night at a time. The BrainDonor Network has a new home in the cloud! I have moved everything from Slicehost to Amazon Web Services. With Rackspace finally pushing through the migration of server images from Slicehost to the Rackspace Cloud, I knew I had the perfect opportunity to switch. Having deployed several client sites to AWS, I also knew I had the comfort level to get everything migrated and set up—so that I could start ignoring it once again. When I first got started with MongoDB in my ASP.NET development, I wasn't able to find as much information as I had hoped on how to get started with MongoDB. All of the core documentation focuses on interacting with the driver and doesn't give a high-level overview of how to use the driver in an actual project. After getting several projects up and running on MongoDB, I wanted to take a break and provide this much-needed guide. In Part One, I discussed managing the scope of $(document).ready(). Next comes the challenge of organizing the contents of $(document).ready() to balance efficiency and maintainability. My team accomplishes this by taking advantage of the Array behaviors of jQuery to structure our code so that function dictates form. Put into practice, these behaviors will structurally organize the code without imposing rules and processes—because we all know just how effective rules and processes are. Welcome to the new and improved Braindonor Network. As you can see, I have completely updated the WordPress theme of this site. There are still some rough edges—but I liked the new theme so much that I was not willing to wait until everything was completely polished. The new theme is a combination of the barebones Sandbox theme and Twitter’s Bootstrap. 
Leave it to O’Rielly to give a great summary of NoSQL—and express what I have been telling other developers. NoSQL is not a technology. It’s a different way of looking at large data sets and data-driven applications. I have been busy adding NoSQL to my developer’s utility belt lately by introducing MongoDB into the projects that I am building. I have already referenced my BrainDonor.Mongo project in some of my other posts. I like to think of it as my formal announcement that I am a card-carrying member of the NoSQL movement. For me, the movement is about not putting all my data eggs in one basket. When it comes to development languages and platforms, I am a true polyglot programmer—and I must extend that further into data.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00351-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,746
8
https://gaming.meta.stackexchange.com/questions/12200/rename-steamplay-to-steam-play/12201#12201
code
Can we rename the steamplay tag to steam-play? The official name of this feature is Steam Play, and the hyphenated tag will fit in with the various other Steam feature tags (e.g. steam-workshop).

Makes sense to me. I've completed this.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00008.warc.gz
CC-MAIN-2021-49
537
6
https://dans.world/repository/stollerSeqUNetOneDimensional2020/
code
Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling
Proc. of the International Joint Conference on Artificial Intelligence - Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI)
Authors: Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon
Keywords: Computer Science - Machine Learning; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing; Statistics - Machine Learning

Convolutional neural networks (CNNs) with dilated filters such as the Wavenet or the Temporal Convolutional Network (TCN) have shown good results in a variety of sequence modelling tasks. However, efficiently modelling long-term dependencies in these sequences is still challenging. Although the receptive field of these models grows exponentially with the number of layers, computing the convolutions over very long sequences of features in each layer is time and memory-intensive, prohibiting the use of longer receptive fields in practice. To increase efficiency, we make use of the "slow feature" hypothesis stating that many features of interest are slowly varying over time. For this, we use a U-Net architecture that computes features at multiple time-scales and adapt it to our auto-regressive scenario by making convolutions causal. We apply our model ("Seq-U-Net") to a variety of tasks including language and audio generation. In comparison to TCN and Wavenet, our network consistently saves memory and computation time, with speed-ups for training and inference of over 4x in the audio generation experiment in particular, while achieving a comparable performance in all tasks.
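The causal convolutions the abstract refers to can be illustrated with a minimal, framework-free sketch (this is illustrative Python, not the authors' implementation): pad the input on the left so each output sample depends only on the current and past inputs, with dilation spacing the taps further apart.

```python
def causal_conv1d(x, kernel, dilation=1):
    """1-D causal convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = [0.0] * pad + list(x)  # left-pad so no future samples are used
    return [sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
            for t in range(len(x))]

# identity kernel leaves the signal unchanged; a shifted kernel delays it
print(causal_conv1d([1, 2, 3, 4], [1.0]))        # → [1.0, 2.0, 3.0, 4.0]
print(causal_conv1d([1, 2, 3, 4], [0.0, 1.0]))   # → [0.0, 1.0, 2.0, 3.0]
```

Stacking such layers with growing dilation is what gives Wavenet/TCN-style models their exponentially growing receptive field.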
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509264.96/warc/CC-MAIN-20200606000537-20200606030537-00152.warc.gz
CC-MAIN-2020-24
1,639
8
https://dottech.org/77115/after-25-years-microsoft-has-a-new-corporate-logo/
code
How long has it been since Microsoft changed its corporate logo? According to Jeff Hansen, General Manager of Brand Strategy at Microsoft, it has been a whopping 25 years. Now, among its re-branding attempts with Windows 8, Outlook.com, SkyDrive, Office 2013, and more, Microsoft has officially unveiled a new corporate logo.

The new logo, which can be seen in the screenshot above, is a mix of modern, clean elements combined with the traditional Windows symbol, an anchor for Microsoft throughout the years. The font being used in this logo is Segoe which, according to Hansen, is the same font Microsoft is using across its other products (such as the ones mentioned above). The different colors in the symbol represent Microsoft's three main product lines: Windows (blue), Office (orange), and Xbox (green). The yellow, apparently, doesn't stand for anything significant (yet).

The new logo is in stark contrast with the previous one. The previous logo, which debuted in 1987, was (is) an all-black, slightly italicized "Microsoft". It had (has) no colors or images. Looking at the 1987 logo and the ones before it, the new Microsoft logo is a fairly huge departure from the past.

Personally speaking, I feel the Microsoft logo looks like it could have been created in Microsoft Paint, but that is the beauty of it: it is a clean, refreshing design which goes in line with what Microsoft is trying to do with its products.

What do you think about the new Microsoft logo? Love it? Hate it? Wish you thought of it yourself and then sold it to Microsoft for a handsome payoff? Let us know in the comments below!
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00069.warc.gz
CC-MAIN-2023-06
1,610
4
https://www.eskell.com/products/python-leatherette-tray
code
Python Leatherette Tray This variant is currently sold out Clean lines with versatile neutral tones and sleek faux leather construction make this contemporary accent tray a show-stopping centerpiece. In black and white snakeskin, this chic tray blends beautifully into any space. With a substantial size and elegant shape, it acts as a perfect catchall, organizing container or tabletop accessory. Available in a medium or large size tray. - Medium tray: 15 x 11 x 2 in. - Large tray: 21 x 13 x 2 in. - 100% cruelty-free faux leather
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00621.warc.gz
CC-MAIN-2022-27
533
6
https://forum.frontrowcrew.com/discussion/7079/comedic-scenes-routine-ideas-for-future-play
code
Comedic Scenes/Routine Ideas for Future Play Today at a Drama Club meeting, we are discussing our Spring Production (and sadly the final production of the year). Because we don't usually have enough time to do a play that already exists (despite how much we want to), at this time of the year we usually do a performance piece consisting of various scenes and sketches one after the other, usually revolving around some sort of theme. The theme this year is comedy. The goal is to take a lot of material from various films and comedic routines (respectively) and adapt it for a high-school audience. I've signed on to be the main writer of the script, to research material that can potentially be used, and to adapt it all into one cohesive script, due one week from now. Even though my forte is movies and television, I only know so many scenes that can both be funny and make sense out of context. The point is, I need some assistance, and I was wondering if one or more of you guys would be interested in giving me a hand. If you do wish to help me out, either post your ideas here or send me an e-mail (the latter of which is located in my profile) with a scene or skit that you think would be really funny to act out on stage, and I will see if I can fit it into the final product. My thanks to you will be giving you credit in the final draft and having your name as a contributor in the program (unless you specifically state not to be credited). There are however a few provisos, a few quid pro quos. There cannot be any coarse or salty language (if there is a minimal amount, I can just make a few alterations, but that's the line). It cannot revolve around disgusting (scatological or similar bodily functions), sexual, or heavily violent content. Last but not least, it has to make sense out of context and be generally humorous.
To give you guys an idea of what I'm searching for, I'm currently busy adapting the "Independent Contractor/Star Wars" scene from Clerks. The content doesn't have to be that high-brow though, because anything is fair game. I would be very much obliged if you guys could lend me a hand here.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662509990.19/warc/CC-MAIN-20220516041337-20220516071337-00583.warc.gz
CC-MAIN-2022-21
2,158
4
https://life-improver.com/rpg/rpg-are-there-any-benefits-to-being-a-small-character/
code
Small characters face a number of disadvantages. They can't use two-handed weapons, and have to use two hands to wield a versatile weapon. Further disadvantages include: - The inability to Grab Large creatures (medium creatures can). - The inability to Bull-Rush Large creatures (medium creatures can). - A shortened vertical reach when jumping (one-third of the creature's height is added to Athletics checks; the smaller the creature, the shorter the reach). - Very limited options for 'Change Self' type rituals and powers, as they specify the new form must be your size category. Are there any benefits to make up for this? I am looking for benefits specifically supported by the rules, rather than anything that relies upon the DM being amenable to particular interpretations. Joe's answers are useful for the latter purpose, but were not what I was looking for clarification on. Put another way: if the penalties applied to small characters were removed altogether, is there anything that would make small characters unbalanced/more favoured compared to medium-sized characters?
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506029.42/warc/CC-MAIN-20230921174008-20230921204008-00065.warc.gz
CC-MAIN-2023-40
1,091
10
https://archive.sap.com/discussions/thread/1955265
code
E-Rec: Error in Candidate Data Overview When I open the candidate 'print preview', the data overview for the candidate does not open. We are getting this error for only one candidate in production; for all other candidates the data overview opens fine. SAP print settings are also fine. The error is: "Error when generating the form for the data overview." Can anyone share their valuable suggestions?
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495367.60/warc/CC-MAIN-20190220170405-20190220192405-00318.warc.gz
CC-MAIN-2019-09
401
4
https://forums.opensuse.org/t/relocate-home-directory/4518
code
I have openSUSE 11 installed and I tried setting up a dual-boot. Unfortunately, it failed after resizing the partitions and making new ones. How can I move my existing /home partition in openSUSE into the new partition the other system created? I tried copying all of the files from my existing /home partition and changing /etc/fstab to point /home to this new partition, but I was unable to log in. It would go through the motions, but then kick me back to the login screen.
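A login loop after copying /home usually means ownership and permissions were not preserved, so the copy should be done with something like `cp -a` or `rsync -aAX` rather than a plain copy. The fstab side of the fix is just swapping the device for the /home mountpoint; a small sketch of that edit (the device names below are made up for illustration):

```python
def repoint_mount(fstab_text, mountpoint, new_device):
    """Return fstab content with the device for `mountpoint` swapped to `new_device`."""
    out = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # fstab fields: device, mountpoint, fstype, options, dump, pass
        if len(fields) >= 2 and not line.lstrip().startswith("#") and fields[1] == mountpoint:
            fields[0] = new_device
            out.append(" ".join(fields))
        else:
            out.append(line)
    return "\n".join(out)

fstab = "/dev/sda1 / ext3 defaults 1 1\n/dev/sda3 /home ext3 defaults 1 2"
print(repoint_mount(fstab, "/home", "/dev/sda6"))
```

After the edit, verify with `mount -a` before rebooting so a typo doesn't lock you out again.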
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510810.46/warc/CC-MAIN-20231001073649-20231001103649-00166.warc.gz
CC-MAIN-2023-40
484
4
https://www.experts-exchange.com/questions/23895138/Can't-get-ADO-Recordset-to-work.html
code
Dim rstTimeEst As New ADODB.Recordset   ' was "ADODB.Record"; a Recordset is needed here
Dim strQuery As String
Dim dbBat1 As New ADODB.Command         ' qualify as ADODB.Command to avoid binding to DAO

strQuery = "DELETE * FROM Commitment_Time_Est_Data"
CurrentDb.Execute strQuery

dbBat1.ActiveConnection = CurrentProject.Connection
dbBat1.CommandText = "SELECT * FROM Commitment_Time_Est_Data"

' A forward-only cursor is read-oriented; open a keyset cursor for inserts
rstTimeEst.Open dbBat1, , adOpenKeyset, adLockOptimistic
rstTimeEst.AddNew                       ' AddNew must precede the field assignment
rstTimeEst!Commitment_ID = lngCommitmentID  ' hypothetical source variable; the original "= !Commitment_ID" was incomplete
rstTimeEst.Update
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720238.63/warc/CC-MAIN-20161020183840-00154-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
958
9
https://freebooksdownloads.net/statistical-learning-and-sequential-prediction/
code
- Category: Computer
- Author: Alexander Rakhlin, Karthik Sridharan
- File type: PDF (261 pages)

Read and download the free eBook entitled Statistical Learning and Sequential Prediction, in PDF format (261 pages), created by Alexander Rakhlin and Karthik Sridharan. This book focuses on theoretical aspects of statistical learning and sequential prediction. Until recently, these two subjects have been treated separately within the learning community. The book follows a unified approach to analyzing learning in both scenarios. To make this happen, it brings together ideas from probability and statistics, game theory, algorithms, and optimization. It is this blend of ideas that makes the subject interesting for the authors, and they hope to convey the excitement. The authors try to make the course as self-contained as possible, and pointers to additional readings are provided whenever necessary. The target audience is graduate students with a solid background in probability and linear algebra. Why should one care about machine learning? Many tasks that we would like computers to perform cannot be hard-coded. The programs have to adapt. The goal then is to encode, for a particular application, as much of the domain-specific knowledge as needed, and leave enough flexibility for the system to improve upon observing data.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297292879.97/warc/CC-MAIN-20240425094819-20240425124819-00638.warc.gz
CC-MAIN-2024-18
1,350
8
http://patricklogan.blogspot.com/2007_10_07_archive.html
code
Stefan Tilkov and Dare Obasanjo debate the number of big sites that would benefit from giving up on the relational database as a centerpiece of their systems. The parameter used in the discussion is scalability. A number of the lessons learned from these big web sites could apply to smaller data center designs as well. The biggest problem I've seen in the typical data center, and from comparing notes it seems fairly common, is the ability to evolve components relatively independently. Relational databases can be in the mix and still evolve, just as more supposedly "loosely coupled" mechanisms can actually tie components tightly together. For example, some systems may use asynchronous messaging to decouple with respect to time, but then pass to each other data that exposes implementation details. And so these systems are coupled to each other with respect to maintenance and enhancements: changing the implementation of component C demands corresponding changes to every other component that consumes component C's implementation-specific messages. But the lessons are worth following for more reasons than just scalability.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00303-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,129
4
https://github.com/mkropat/luks-mount
code
Teach mount(8) to mount LUKS containers directly

Building on the technique pioneered by Tobias Kienzler, luks-mount extends the mount command so it can mount LUKS volumes directly, handling all the cryptsetup work itself. In other words, this:

cryptsetup luksOpen /dev/mapper/somevg-somevol somevol
mount /dev/mapper/somevol /some/mountpoint

Becomes simplified to:

mount /dev/mapper/somevg-somevol /some/mountpoint

Once you've added an entry to /etc/fstab:

UUID=... /some/mountpoint crypto_LUKS defaults,noauto 0 1

luks-mount is meant to be complementary to crypttab. If you want to unlock an encrypted volume at boot and have it stay unlocked, crypttab handles that for you. On the other hand, if you want to mount an encrypted volume on demand — and unmount it when you're done with it so it stays safe — that's where luks-mount can help. By default, 15 minutes after you mount a LUKS volume, luks-mount will begin to monitor the mount point and wait for you to finish using it. As soon as it's no longer in use, luks-mount will unmount the encrypted volume and automatically close it for you.

Install from the PPA:

sudo add-apt-repository ppa:mkropat/ppa
sudo apt-get update
sudo apt-get install luks-mount

Or build a package from source:

git clone https://github.com/mkropat/luks-mount.git
cd luks-mount
make deb
sudo dpkg -i luks-mount*all.deb
sudo apt-get install -f  # if there were missing dependencies

Or install directly:

git clone https://github.com/mkropat/luks-mount.git
cd luks-mount
make && sudo make install
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510941.58/warc/CC-MAIN-20231001205332-20231001235332-00577.warc.gz
CC-MAIN-2023-40
1,424
18
http://www.geekstogo.com/forum/topic/198436-help-me-in-finding-samples-of-dissertation/
code
help me in finding samples of dissertation Posted 16 May 2008 - 10:35 PM Posted 17 May 2008 - 05:15 AM Posted 20 February 2009 - 01:12 AM I am doing my MBA from Ireland in international business. I have worked with construction industry for couple of years including a stint in gulf area and I needed to submit my dissertation this semester. I have a special interest in business development and wanted some suggestions for any good topic specifically focusing the Ireland industry. I was browsing through the website when I found this dissertation help website providing dissertation writing help. I didn’t know that they also offer help with dissertation writing along with FREE dissertation topics. Anyway, here is the list of titles they sent me after my request: 4. The economic impact, the Ireland health care business creates by hiring nurses from second world countries to help provide relief to the current nursing shortage here. 5. Business and Financial Analysis of a domestic business in Ireland expanding towards International Trade. 6. Impact of IT development in FMCG sector My supervisor really liked the first topic and now I am writing a dissertation on it. The free dissertation topics service really helped me in getting off with my dissertation writing quickly. I would like to know other people experiences on that… Edited by BHowett, 20 February 2009 - 07:40 AM. Posted 20 February 2009 - 07:43 AM
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00215.warc.gz
CC-MAIN-2023-06
1,496
13
https://manybooks.net/titles/aldricht2335523355-8.html
code
The Little Violinist Ah, that little violin!--a cherished relic now. Perhaps it plays soft, plaintive airs all by itself, in the place where it is kept, missing the touch of the baby fingers which used to waken it into life! End of Project Gutenberg's The Little Violinist, by Thomas Bailey Aldrich *** END OF THIS PROJECT GUTENBERG EBOOK THE LITTLE VIOLINIST *** ***** This file should be named 23355-8.txt or 23355-8.zip ***** This and all associated files of various formats will be found in: http://www.gutenberg.org/2/3/3/5/23355/ Produced by David Widger Updated editions will replace the previous one--the old editions will be renamed. Creating the works from public domain print editions means that no one owns a United States copyright in these works, so the Foundation (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00480.warc.gz
CC-MAIN-2022-49
866
8
http://www.geekstogo.com/forum/topic/165249-screen-blanks-system-freezes/
code
About a month ago, while defragging, the system suddenly rebooted. It got through the welcome screen and "music", but the desktop would not come up. We were able to get it working again and reloaded all the programs, and things seemed to be okay, although shortly after, while working in Word, the problem began to occur again; then in PowerPoint, Paint Shop Pro, and once or twice in several others. I have updated the mouse drivers, removed and reseated the graphics card, and determined that its drivers are the latest. I don't know if this is a hardware problem or what, but since we are using the XP OS, I thought I should start here. Has anyone encountered this problem before, and does anyone have a solution? Is something failing that needs to be replaced? Thanks for any help!
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152085.13/warc/CC-MAIN-20210805224801-20210806014801-00222.warc.gz
CC-MAIN-2021-31
716
5
https://support.waspbarcode.com/kb/articles/mobileasset-i-moved-the-mobile-asset-database-to-new-machine-and-now-i-need-to-update-the-work
code
*These steps are for Mobile Asset v4 or v5. For Mobile Asset v6, please use this article: MobileAsset: Database Server's machine name has changed, now Mobile Asset will not open On the workstations, go into the registry editor and browse to folder HKEY_LOCAL_MACHINE\SOFTWARE\WaspTechnologies\MobileAsset\Options There are 2 keys that need to be modified. The Server key will read <servername>\WaspDB or <servername>\waspdbexpress. The License Server key will be just the server name. After changing those two keys, close the registry editor and go to the Control Panel. Go into Administrative Tools, then Data Sources (ODBC). In that window, go to the System DSN tab and double click on MobileAsset. On the bottom line of that screen is the server selection. Point it to the new server name\waspdb or name\waspdbexpress, to match the registry. Click next through the rest of the screens, then Finish. Click on Test Data Source and make sure the window that comes up says TESTS COMPLETED SUCCESSFULLY! Click OK on those windows and close the Control Panel. It should now be all set up properly.
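The two workstation values described above can also be distributed as a .reg file instead of editing each machine by hand. This is a sketch, not Wasp's official file: NEWSERVER and the WASPDBEXPRESS instance name are placeholders, and it assumes the value names are exactly "Server" and "License Server" as the steps describe:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WaspTechnologies\MobileAsset\Options]
"Server"="NEWSERVER\\WASPDBEXPRESS"
"License Server"="NEWSERVER"
```

Double-clicking the file (or running `reg import` from an elevated prompt) merges the values; the ODBC System DSN still has to be updated separately as described above.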
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00867.warc.gz
CC-MAIN-2024-18
1,094
5
https://communities.sas.com/t5/SAS-Data-Mining-and-Machine/Size-of-data-causing-problems/td-p/194526?nobounce
code
08-07-2015 10:39 AM
I've recently been trying to run Enterprise Miner on some data to create some models. The data I am trying to work with will hopefully be combined with other data in the future when I create the final model(s). However, I've been attempting to run the default cluster node on the data set, and it's run for more than a day so far and doesn't show any sign of finishing soon. The data set is ~22.7 million rows; is this time expected for a data set of this size? Ideally this isn't all the data either, but currently the process is prohibitively long for my use, never mind using the complete data set. On that note, I took a subset of the full data (~117k rows) to play around with, creating some decision trees. I tried to view the actual tree that the model comparison selected as the best, but the tree seems to be too large for EM to handle well, as trying to view any form of the results takes 10+ minutes to update. Is it possible to export the results as text, PDF, or picture to view in another program?

08-07-2015 12:01 PM
I'd consider contacting tech support. SAS should handle that size of data in my opinion. I haven't tried EM on anything that large yet but will be soon, so hopefully it doesn't have those issues.

08-07-2015 12:12 PM
High-Performance Data Mining is the optimized way of analyzing 22.7 million observations. Enterprise Miner 12.3, 13.1, 13.2, and 14.1 have specific nodes that run HP procs. For example, the HPCluster node uses PROC HPCLUS, which can also take advantage of a grid, distributed environment. Touch base with Tech Support to confirm that your system is well suited to handle your large data sets. In the meantime, you can see a static summary of your tree if you connect a Reporter node to your flow. By default it generates a PDF with the relevant results from your diagram flow. I hope this helps!

08-07-2015 03:03 PM
When you say HPCluster node, are you referring to a node separate from the Cluster node? Or is it the Cluster node with specific settings for HPDM? How do I use or access HPDM nodes? I haven't seen any reference to them in EM or elsewhere; are they an add-on? Regarding the Reporter node, it encounters an error (sys error 20002) whenever I try to generate the report. Any idea on the cause of this? Thanks for your help! @Reeza I'll let you know what I find out regarding the data size. @Jaap and @MiguelMaldonado I apologize, I missed that the HP nodes are for 12.3 and higher. I'm running 12.1, unfortunately, so I guess I'm unable to make use of them. Are the normal nodes still suited to handling data sets of this size?

08-12-2015 08:29 AM
Would you be able to provide any insight into expected processing time for data sets of 22.7 million rows with non-HP nodes? As well, I've managed to get the Reporter node to create PDFs now (not sure how, it just started working), but it only displays a small segment of the tree. Do you have any ideas on how to fix this?

12-09-2015 09:57 AM
I'd be happy to run something specific and report on my timing, for example if you tell me to run X nodes on Y data set. The runtime for nodes will vary according to the data, so it won't be an apples-to-apples comparison. The machine where I have SAS installed is pretty powerful. If your original problem is that you cannot visualize your results, you should seriously consider touching base with Tech Support. Use one of these forms: http://support.sas.com/ctx/supportform/createForm . They will get you up and running in no time! Let me know if I can help!

08-07-2015 03:08 PM
What's New in SAS(R) 9.4 EMiner 12.3 notes a licensing change: "All of the high-performance data mining nodes are now available (at no additional licensing fee) for threaded parallel processing on your existing SAS Enterprise Miner desktop or server. High-performance k-means clustering and decision tree nodes have been added to SAS High-Performance Data Mining."

08-08-2015 02:33 AM
That note of a license change is a sneaky one. As Miner has less value when it is not up to date, my suggestion is to talk to your SAS sales/account manager. You have the right to a free update (SAS 9.4, STAT 14.1) with everything included. Those upgrades are mostly problematic because of the way SAS has implemented the SAS system, not being aligned to common SDLCM policies (mount points, rpm, isolated data/code), causing a lot of ICT headaches. In my opinion you could ask for a 12.1 HP license for free so you could continue your work. Upgrading (9.4) should be planned, but it can require a lot of time/budget to get an aligned installation.

08-10-2015 10:16 AM
Unfortunately, due to a combination of being a summer student and the IT red tape of where I'm working, I doubt I'll be able to get the upgrade sorted in time. Do you know what data set size I should attempt to stay within?

08-10-2015 02:55 PM
Sorry, that kind of information on sizing is difficult. There is something of a trade-off between sizing/capacity and processing time. Maybe MiguelMaldonado knows? Most of it is hidden behind the node curtains.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887600.12/warc/CC-MAIN-20180118190921-20180118210921-00602.warc.gz
CC-MAIN-2018-05
5,027
42
http://tomatousb.org/forum/t-1337087/proper-set-up-of-openvpn-on-a-tomato-shibby-fw
code
My goal is to set up a secure, encrypted VPN connection between a client (a Windows laptop connected to an unsecured public WiFi) and a server (my home router) so that I can access the internet securely through my ISP. I want all the traffic and services to go through the VPN tunnel (for maximum security). I don't need access to my home LAN, just to the internet.

What I did so far:
1. I have installed and set up the latest Tomato (Shibby) firmware on my router (Linksys E1200).
2. I have also installed the latest OpenVPN release in Windows and set it to always run as administrator.
3. I followed the HowToGeek.com tutorial "Connect to Your Home Network From Anywhere with OpenVPN and Tomato" and set everything up, but it didn't work the way I desired and I had to make quite some changes to finally make it work properly.
4. Now my VPN connection works, but some things seem off: if I put a check in the box at "Direct Clients to redirect Internet traffic" in the Tomato VPN Server setup, my client can connect but cannot access the internet. Is that normal? I think I did something wrong.
5. The Tomato VPN server only has 4 entries for the keys, so I can't use a server.ovpn (server.conf) file to fine-tune the server part of the VPN connection (and so I can't add the push "redirect-gateway def1" line to the server). I only have the "redirect-gateway def1" line in the client config file, but it seems to make all the traffic go through the tunnel, as wanted.
6. There's another problem, with the Diffie-Hellman feature. If I include the line "dh dh1024.pem" in the client configuration file, OpenVPN won't connect. I have to delete the DH line in order to be able to connect (even though I have pasted the DH file into its field on the Tomato VPN server). I don't know why that is so and would like to make it work properly.
7. This is how the basic part of my Tomato VPN server setup looks; should I change something to make it more secure, or is it all right for my intended use (described as my goal)? Also, if I change the protocol from UDP to TCP it won't connect; is that usual?

This is my current client config file that seems to work:

remote x.x.x.x 1194 # x.x.x.x represents the public IP on the server part.
redirect-gateway def1 # Without this it won't direct everything through the tunnel.
# dh dh1024.pem — as said, if I remove the hash and uncomment this, OpenVPN won't connect.

To reach my goal, what should I change? I will go back to point 3 and do it all over so that I learn to set it up properly, clean and efficient. What should I change in the Tomato VPN server setup? What in the client config file? And what when making the keys and certificates and other things?
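For comparison, a minimal routed (tun) client config for this kind of "all internet through home" setup might look like the sketch below; the remote address and the certificate/key file names are placeholders. Diffie-Hellman parameters are only ever used on the server side of a TLS handshake, so a client config does not need a `dh` line at all:

```text
client
dev tun
proto udp
remote x.x.x.x 1194        # public IP of the home router (placeholder)
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt                  # file names are placeholders
cert laptop.crt
key laptop.key
redirect-gateway def1      # route all traffic through the tunnel
verb 3
```

If clients can connect but not reach the internet with "Direct Clients to redirect Internet traffic" enabled, the usual suspects are NAT/masquerading of the tunnel subnet on the router and the DNS servers pushed to clients.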
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00625-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,663
14
https://www.holo.mg/stream/office-impart-sandbox-mode/
code
Office Impart Presents Digital Art in “Sandbox Mode” Berlin’s Office Impart opens “Sandbox Mode,” a group exhibition that draws parallels between free-form gameplay and digital art. Impart teamed up with JPG’s María Paula Fernández and curator Stina Gustafsson to bring together new and recent code-based works by Mitchell F. Chan, Stine Deja, Andreas Gysin, Sara Ludy, and others that emerged from radical experimentation. Ludy’s new AI video series Metamimics (2023), for example, conjures crazed carnival scenes from deep within the machine.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506029.42/warc/CC-MAIN-20230921174008-20230921204008-00622.warc.gz
CC-MAIN-2023-40
575
3
https://answers.sap.com/questions/7898283/delivery-creation-problem---material-xxxx-not-defi.html
code
Sale order created and delivery created for a particular Material / Sales organisation / Customer combination. PGI was not done. But before that, by mistake, the line item of the delivery was deleted and saved. Only the line item was deleted and the delivery note is still accessible with VL02N. It now shows no line item at all. Now when we try to input the material number with the quantity, we get the below error: "Material XXXX not defined for sales org. XXXX distr.chann. lang. EN" Message no. VL009. Kindly advise how to input the material or how to proceed further in this scenario. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495012.84/warc/CC-MAIN-20230127195946-20230127225946-00522.warc.gz
CC-MAIN-2023-06
607
7
http://www.cmap.polytechnique.fr/~allaire/ff3dpages/ff3doverview.html
code
We propose several FreeFem++ routines which allow users to optimize the thickness, the geometry or the topology of elastic structures. All examples are programmed in three space dimensions. These routines were written by G. Allaire and A. Kelly. Warning: although they have been written and tested with great care, these FreeFem++ programs come with absolutely no warranty. Their authors decline any responsibility linked to their use. What is shape optimization? In the context of solid mechanics it is also called structural optimization. It is the mathematical theory which makes possible the "automatic" optimization of mechanical structures. By "automatic" it is meant that these methods and algorithms can be implemented on a computer, which can analyse and improve the designs of numerous successive configurations without any help from the engineer or designer. For more details on this topic we refer to the book "Conception optimale de structures" quoted below.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998724.57/warc/CC-MAIN-20190618123355-20190618145355-00030.warc.gz
CC-MAIN-2019-26
982
9
http://stackoverflow.com/questions/9464546/designing-a-social-analytics-dashboard-with-silverlight?answertab=votes
code
I've got this project going, to implement a social media dashboard. It has two parts to it: the first part allows you to keep track of multiple social accounts, something like HootSuite. The second, and more complicated one, is social media analytics. Implementing the first part isn't too tough. What is tough is the analytics and reporting module. I've used other apps, like HootSuite and SMA, but for some reason I never found their analytics too intuitive. But now that I sit down and think of implementing this myself, as an internal project for my company's CRM division, I can't figure out how I'm going to go about this. I have a few ideas though, and it would be great if you guys could tell me if there's some better way to do it. 1) Write code to pull in analytics data from different networks. So, there will be a different module for accessing data from Facebook Insights, a different module taking care of accessing tweets, and so on. The problem with this is that, obviously, there's a lot of boilerplate code to write, and a lot of time is needed, which I'm rather short on. 2) Microsoft has launched this publicly. As the page says, "Microsoft Codename "Social Analytics" is an experimental cloud service. It’s aimed at developers who want to integrate social web information into business applications." Sounds just like what I need. But there's no guarantee that this will become a full-time service, so I'm not sure if using it is the best idea. Apart from that, as of now, I'm pretty sure that I'll be using Silverlight 5 for development. Your views are greatly appreciated!
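Option 1 above (one module per network) is essentially a connector abstraction, which can be sketched like this. The class and method names are hypothetical, not from any real SDK, and the stub return values stand in for real API calls:

```python
from abc import ABC, abstractmethod

class AnalyticsConnector(ABC):
    """One connector per network; hypothetical interface, not a real SDK."""

    @abstractmethod
    def fetch_metrics(self, account_id: str) -> dict:
        """Return a normalized {metric_name: value} dict for the account."""

class FacebookInsightsConnector(AnalyticsConnector):
    def fetch_metrics(self, account_id: str) -> dict:
        # Real code would call the Facebook Insights API here.
        return {"impressions": 0, "engagement": 0}

class TwitterConnector(AnalyticsConnector):
    def fetch_metrics(self, account_id: str) -> dict:
        # Real code would call the Twitter API here.
        return {"tweets": 0, "mentions": 0}

def collect_all(connectors, account_id):
    """Merge metrics from every registered network into one report."""
    report = {}
    for connector in connectors:
        report.update(connector.fetch_metrics(account_id))
    return report
```

The dashboard and reporting layer then iterates over whatever connectors are registered, without knowing anything about each network's API; adding a network means adding one class, not touching the reporting code.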
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131303523.19/warc/CC-MAIN-20150323172143-00191-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
1,600
7
https://colony.io/missions/powertobuild-mission-1
code
- Most of us are familiar with the likes of Uber, AirBnB, JustPark and Rover. - But while these and other sharing economy firms are both numerous and diverse, they are all dependent on a trusted, centralized platform. - How would you reimagine a sharing economy business in the context of a DAO? - Pitch us with a concept that reinvents any existing (Web2) sharing economy business as a DAO. - For example, what would Uber or AirBnB look like as a DAO? - How would other sharing economy businesses work as a DAO? - Pick one Web2 company and develop your DAO idea in the form of a pitch deck, lightpaper (text document) or infographic. 1. The Vision and Value Proposition - A very brief overview of your business and the value that you provide to the market. - Keep it short and simple. A few sentences max. 2. The problem you’re solving - What is broken about the current Web2 equivalent? - Why is this an important problem to solve? 3. Target market and opportunity - What is the total market size and how do you position your solution in the market? - Tell the story about the scope and scale of the problem you are solving. 4. Your DAO solution - Describe your sharing economy service in the context of a DAO. - How does it work? - What is the business model? - How is ownership and membership defined? - What role does the DAO’s token play? - What are the benefits of operating as a DAO? - Include any other details which make your solution unique and innovative. - This is not a proper pitch to raise venture capital, so don’t include a roadmap, marketing strategy, team or any financials. - Don’t get technical, even a 10 year old should be able to understand it. - Blow us away with your creativity, communication and simplicity. Winner 1 = 2,500 CLNY Winner 2 = 2,000 CLNY Winner 3 = 1,500 CLNY Winner 4 = 1,000 CLNY Winner 5 = 500 CLNY - Only one entry per person per challenge, so make it a good one! - The Colony team will choose up to 5 exceptional submissions only. 
Your submission may be in the form of a pitch deck, one-page text document or a detailed infographic. Step 1 - Post your submission on Twitter, tag @joincolony and include the hashtag #powertobuild in your tweet. Step 2 - Before 12pm UTC on 27th July, post a link to your Twitter post in the relevant Discord thread in the #🚀colony-missions channel. Do not post your submission directly in the Discord thread, only post your tweet link. 27th July 2022 @ 12:00pm UTC
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00372.warc.gz
CC-MAIN-2023-14
2,455
39
http://charlieomaem.bluxeblog.com/3643227/new-step-by-step-map-for-python-project-help
code
Now that we understand distributions for ApplicantIncome and LoanIncome, let's understand categorical variables in more detail. This chapter is quite broad, and you should take advantage of reading the chapter in the book in addition to watching the lectures to help it all sink in. You may want to go back and re-watch these lectures after you have finished a few more chapters. Although other reviews have downrated this course for being hard and for the assignments diverging from the lectures, I am giving this a five precisely for that reason. Was this review helpful to you? Yes. How do these MOOCs or free online courses work? MOOCs are designed for an online audience, teaching mostly through short (5-20 min.) pre-recorded video lectures that you watch on a weekly schedule, whenever convenient for you. They also have student discussion forums, homework/assignments, and online quizzes or exams. Top Rated Courses. Pandas is used for structured data operations and manipulations. It is widely used for data munging and preparation. Pandas was added relatively recently to Python and has been instrumental in boosting Python's adoption in the data science community. Feature engineering plays a crucial role in building predictive models. In the case above, we have not performed variable selection. We can also select the best parameters by using the grid search fine-tuning method. The videos give an overview of pandas, Python and NumPy. Most of the functionalities are explained, and each is accompanied by a notebook of sample code. The assignments are another ballgame. Week 2's assignment is fairly consistent with what is taught in the course for that week, although a bit of research on Stack Overflow and in the pandas documentation was required. Python: there are some areas, such as the number of libraries for statistical analysis, where R wins over Python, but Python is catching up very fast. With the popularity of big data and data science, Python has become the first programming language of data scientists. BeautifulSoup is used for scraping the web. It is not as capable as Scrapy, since it extracts data from just one web page per run. There is no clear winner, but I suppose the bottom line is that you should focus on learning Python as a language. Moving between versions should then just be a matter of time. Stay tuned for a dedicated article on Python 2.x vs 3.x in the near future! The Python interpreter sees this at module load time and decides (correctly so) that the global scope's Var1 should not be used in the local scope, which leads to a problem when you try to reference the variable before it is locally assigned. Source and binary executables are signed by the release manager using their OpenPGP key. The release managers and binary builders since Python 2.3 have been: Anthony Baxter (key id: 6A45C816). Stackless Python: an enhanced version of the Python programming language which enables programmers to reap the benefits of thread-based programming without the performance and complexity problems associated with conventional threads.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825098.68/warc/CC-MAIN-20181213193633-20181213215133-00338.warc.gz
CC-MAIN-2018-51
3,475
14
http://www.deegree.org/how-to-add-article
code
Many organisations successfully use deegree as web services. This portfolio "deegree in action" shows a number of interesting projects in which deegree plays a significant role. The deegree community is warmly invited to add projects to this page. - Login (or create an account first), - click on the menu item "Create Article" on the right; - The article will be published immediately; however, the website moderator will have a look at it, just to keep things organized.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189667.42/warc/CC-MAIN-20170322212949-00551-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
474
4
https://newviews.com/forums/reply/re-report-printing/
code
Q. Is there a way to append a printed report in the same fashion as was available in NV1? A. No, but this is something that could be added. The only caveat is that a separate sheet would be required, as the column layout of the appended report may be totally different from the first, so the results would be messy. Q. Is it possible to specify which sheet of an existing file the report will be printed to? A. No, but anything is possible down the road. Q. I am trying to figure out a way to preserve all the formatting each time I send a report to a file. A. The next build will include the option to specify a ‘template’ for printing reports (all tables, actually). The template will (of course) not be able to control the column contents, but it will allow you to specify the fonts for the title rows, column titles, and the body of the report. And it will give you control over H/V centering and page margins.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00311.warc.gz
CC-MAIN-2022-33
917
6
https://guan.dk/software-is-wrong
code
On the latest episode of Build and Analyze, Marco Arment tries to reconcile his belief that copyright infringement is always wrong and never a legitimate form of protest—we should simply stop consuming the works in question—with his statement that almost all software developers will inevitably and unavoidably practice and infringe a software patent. He does this by saying that developers will inadvertently infringe a patent that the developer didn’t even know about, but you almost never accidentally copy a song or movie or tv episode. I don’t think this distinction gets him very far. If I randomly copy a big chunk of a friend’s early jazz collection and start playing it on shuffle, is that not just as wrong? True, some of the songs could be published under a Creative Commons license or even in the public domain. I don’t know for a fact that my copying any specific song is infringement, especially if I’ve never heard these songs before and don’t know when they were published, or even who the author is. But I have almost certainly infringed someone’s copyright. You never accidentally write a piece of software and publish it. Xcode doesn’t type up code by itself (though that would be an awesome feature) and Apple does not steal binaries from your computer and upload them to iTunes Connect without your permission. If you know that by publishing your app you have practiced someone’s patent, that’s also wrong even if you don’t know which patent it is. If the probability is high enough that you are infringing a patent, it shouldn’t matter very much that you don’t know which patent it is. I wouldn’t necessarily conclude that copyright infringement is ever the right thing to do, or not wrong, but it follows naturally from Marco’s position on copyright infringement that publishing software in the current patent regime is wrong. So Marco should stop doing that. PS: Don’t stop selling Instapaper, I love that app.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866926.96/warc/CC-MAIN-20180624083011-20180624103011-00459.warc.gz
CC-MAIN-2018-26
1,970
5
https://www.mail-archive.com/[email protected]/msg287748.html
code
STINNER Victor added the comment: "Don't want to add too much noise, but this issue also affects the manylinux project build compiler (gcc 4.8.2)." Can you elaborate on these issues? Are you getting the same errors as those described in the initial message with GCC 4.2? If not, you may open a new issue to track compilation issues of Python 3.6 on GCC 4.8. Python tracker <[email protected]> Python-bugs-list mailing list
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424909.50/warc/CC-MAIN-20170724182233-20170724202233-00479.warc.gz
CC-MAIN-2017-30
433
8
https://falloutmods.fandom.com/wiki/Packaging_your_mod_into_a_DAT_file
code
This tutorial will cover how to package your mod from a 'data' folder into a DAT file. One of the main advantages of putting your mod into a dat file is that you don't have to worry about setting the *.pro files as read only, and it is also a whole lot tidier. In the 'Source Files Directory' browse to your mod's data folder. Then in the 'Destination Dat Filename and Patch' enter the location and dat filename that you want. Now just click the 'Build Dat File' button, wait a few moments for the dat file to build, and you're finished. You now have your mod compressed into an easy to distribute dat file.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00352.warc.gz
CC-MAIN-2022-33
603
2
http://www.efinancialcareers.co.za/jobs-Hong_Kong-Hong_Kong-Senior_Linux_Administrator__DevOps_Engineer__Linux__Information_Technology__Quantitative_Trading__Hong_Kong.id02013237
code
Senior Linux Administrator / DevOps Engineer / Linux / Information Technology / Quantitative Trading / Hong Kong - exceptional and flexible - Hong Kong - Permanent, Full time - BAH Partners - 17 Aug 17 My client is a well-established and dynamic quant trading firm that is highly profitable and a market leader in the quant trading space. They are now seeking to hire a senior Linux Administrator / DevOps Engineer to join their head office in Hong Kong. As a Systems Engineer / DevOps Engineer, you will have the opportunity to work with the latest / most modern tools and concepts in the Linux / Open Source space, with a heavy focus on automation. - Linux administration - DevOps development Ideal candidate for the role: - Hands-on Linux engineer who is excited to build out well designed systems/applications - Strong scripting skills - Solid programming skills in one of the programming languages - Passionate about new technology - Excellent communication skills in English (you will work in a multicultural team) - A true innovator who can think outside of the box - Eager to learn and improve - Extremely profitable company - Extremely competitive compensation - Fun team - Exposure to cutting edge technologies and methodologies - Dress down culture If you are interested in this position, or would like to explore other opportunities within enterprise information technology, please send your resume / contact details to Noel Lau at [email protected] or call (852) 2544 4477 / (852) 9834 2181 for a confidential discussion. BAH Partners is a top supplier of the best finance technology talent for most of Hong Kong's top-tier financial institutions. For more opportunities, please visit our website at http://bahpartners.com/live-roles or look at our other jobs listed here at http://www.efinancialcareers.hk/jobs-BAH_Partners.br00000340
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00191.warc.gz
CC-MAIN-2017-34
1,969
28
http://web.cs.wpi.edu/~matt/courses/cs563/talks/powwie/p2/decimate.htm
code
CS563 - Advanced Topics in Computer Graphics March 18, 1997 This presentation deals with the topic of generalized unstructured decimation. First decimation will be defined and described, followed by decimation methods for several different types of images. The performance of the decimation process will be compared. The material covered in this presentation was originally motivated by the need for efficient and accurate decimation of volume tessellations (unstructured tetrahedrizations). Pre-existing surface-based decimation schemes didn't generalize to volumes, so a new technique was developed, which allowed local, dynamic removal of vertices from an unstructured tetrahedrization while preserving its initial tessellation topology and boundary geometry. Before proceeding any further, it is important to make sure that we understand the meaning of some of the terms which will appear in various places. This presentation follows the format laid out in the paper "Generalized Unstructured Decimation" by Kevin J. Renze and James H. Oliver. The images incorporated in this presentation were taken from that paper. Some of the concepts were difficult to understand, and in those cases where they were fundamental, text may have been copied from that paper. Likewise, the algorithms presented by Renze and Oliver were copied directly. The term decimation is used to describe the process of removing entities, such as polygons, from a geometric representation. The goal of decimation is to significantly reduce the number of primitives required to accurately model the problem of interest, and to do so intelligently. Unstructured decimation algorithms came about primarily because of research into surface reconstruction. There are a number of areas in which decimation can prove useful. Besides surface reconstruction, decimation may be useful in scientific and engineering applications.
Decimation of polygonal surface models is especially useful in synthetic environment applications where large, complex models must be rendered at interactive frame rates (15 to 30 frames per second). Since models for synthetic environments are usually generated for other applications, such as CAD and medical imaging, they are not well suited for interactive display, since they are typically very large. The basis for the algorithm is a unique and general method to classify a triangle with respect to a non-convex polygon; in other words, to determine whether the triangle lies inside or outside of the polygon. The decimation algorithm produced can be applied to both surface and volume tessellations, and is efficient and robust because it does not use floating-point classification calculations. Surface decimation algorithms usually use either vertices, edges, or faces as the primitive element to be removed. There are a number of different algorithms which can be used for surface decimation. None of them, however, apply to volume decimation. The basic concepts in the two- and three-dimensional surface literature are extended to apply to unstructured volume applications. The general decimation algorithm has several relatively simple steps. For 2D problems, such as planes or general surfaces, there are a number of algorithms which can be used to compute the new triangulation. In the case where only convex polygons are used, the problem is very easy to solve. Using an algorithm for triangulating star-shaped polygons, or using a constrained triangulation method such as a greedy triangulation algorithm, would be enough to solve the 2D tessellation problem. None of these algorithms seem to work in more than two dimensions, however. As Bern and Eppstein stated in their paper Mesh Generation and Optimal Triangulation, "There does not seem to be a reasonable definition of a constrained Delaunay tessellation in three dimensions.
Unconstrained Delaunay tessellations can be generalized to higher dimensions, but the result is always convex. If the initial local boundary loop is non-convex, some of the resulting n-simplices will intersect valid n-simplices which are external to the local boundary loop. This violates the global tessellation. This requires that we implement an n-simplex classification system in order to handle non-convex regions. The n-simplex classification problem has two possible solutions: for angle summation we need an ordered boundary, and it may be restricted to 2D problems. Ray intersection algorithms can be applied to both 2- and 3-D problems, and additionally do not require an ordered data structure. However, ray intersection algorithms allow degenerate cases, such as when a ray is tangent to a boundary edge. An alternative method uses a postprocessing scheme which preserves the topology to classify the valid n-simplices. The decimation algorithm needs an arbitrary collection of vertices as its minimum input. The connectivity doesn't need to be specified, but can instead be computed using any general tessellation algorithm. Non-convex geometry and topology with holes can be handled by explicitly specifying the vertex and connectivity information. In order to determine how robust the algorithm is, extreme cases are tested. The conditions used are that each boundary vertex is retained, and each interior vertex is a candidate for removal. All non-boundary vertices can be deleted. Most of the time, practical applications will use less severe decimation criteria. Criteria used to determine whether a vertex may be removed can be based on geometric properties or any scalar governing function specific to the application. Different criteria are usually required for rendering and analysis applications.
Given a planar tessellation, for each vertex v in the candidate decimation list: Step two in the above algorithm generates a candidate tessellation and identifies the set of valid triangles that tessellate the hole created by vertex decimation. First create an unconstrained Delaunay triangulation over the set of adjacent vertices that define the local boundary loop. Every adjacent vertex is initially connected to the candidate decimation vertex v by an edge. The convex hull of the local triangulation loop may not coincide with the local boundary loop, so we must determine which candidate triangles lie inside the boundary. From this set we must determine the remaining interior triangles. If all of the decimation criteria have been met for the candidate vertex, we can apply the following algorithm to the hole which will be created by removing it: The figure above shows the contents of the Valid, Interior, and Exterior stacks corresponding to Phase 1 and Phase 2 of the classification algorithm described above. In this example, the local boundary loop encloses the triangles labeled 0, 1, 2, 4, and 5. Unshaded triangles denote that the triangle is on the Valid stack. Lightly shaded triangles reside on the Interior stack, and darker triangles reside on the Exterior stack. In Phase 1, all successful classifications share one thing: each interior triangle exclusively shares at least one original boundary loop edge. Complex convex and non-convex geometry is the reason why Phase 2 of the classification algorithm is required. After Phase 1, we cannot determine whether triangles which share no original boundary edges should be members of the Valid stack. These elements are put on the Interior stack. Triangles which do not exclusively share at least one original boundary edge are put on the Exterior stack, to be considered in Phase 2. This local tessellation algorithm fails if an exclusively shared, original, local boundary loop edge does not exist in the local tessellation.
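As a rough illustration, the edge-sharing test at the heart of Phase 1 can be sketched in Python. The data layout (triangles as vertex-id triples, edges as frozensets) is my own simplification, and the "exclusive sharing" refinement and the Exterior stack are omitted for brevity:

```python
def phase1_classify(candidate_triangles, boundary_edges):
    """Split candidate triangles into Valid / Interior stacks.

    candidate_triangles: list of 3-tuples of vertex ids
    boundary_edges: set of frozensets, the original local boundary loop edges
    A triangle that shares at least one original boundary edge is accepted
    onto the Valid stack; the rest are deferred to the Interior stack, to be
    resolved by the Phase 2 pass described in the text.
    """
    valid, interior = [], []
    for tri in candidate_triangles:
        a, b, c = tri
        edges = {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
        if edges & boundary_edges:
            valid.append(tri)
        else:
            interior.append(tri)
    return valid, interior
```

For a convex hole (e.g. a square boundary loop re-triangulated after removing its interior vertex), every candidate triangle touches the boundary and the Interior stack comes back empty, matching the convexity discussion later in the text.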
This isn't an anomaly, which can be shown by the relatively trivial non-convex topology illustrated in the following figure. Since the Delaunay tessellation algorithm is guaranteed to return a candidate local tessellation bounded by its convex hull, Phase 1 of the classification algorithm will fail. The "failed" candidate decimation vertex will be queued for processing later. As soon as one of its adjacent vertices is removed, the "failed" vertex's decimation candidacy is automatically renewed. A general 3D surface decimation algorithm was developed based on the new planar decimation algorithm described above. To show a straightforward extension to surface decimation, Renze and Oliver implemented the local projection plane and the distance-to-plane criterion developed by Schroeder, Zarge, and Lorensen in their paper Decimation of Triangle Meshes. Area-weighted normals of the triangles incident at the candidate decimation vertex are used to compute a local average plane. The 3D surface points defining the local boundary loop are projected onto the average plane, which allows the use of the local planar decimation algorithm described in the previous section. It would appear that, in general, it isn't possible to apply a 2D triangulation algorithm to 3D decimation. The use of a local projection plane may project a simple polygon on the 3D surface to a nonsimple polygon on the projection plane. The failure case for this nonsimple polygon won't cause the decimation algorithm to fail. Instead, the candidate vertex will simply be rejected because of the existence of a topological violation. Failure of the original local boundary loop to be preserved is guaranteed, because a Delaunay algorithm can't return self-intersecting edges.
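The area-weighted average normal mentioned above can be sketched as follows. This is a generic implementation of the standard technique, not code from Renze and Oliver; it exploits the fact that the cross product of two edge vectors has magnitude twice the triangle's area, so summing raw cross products weights each normal by area automatically:

```python
def average_plane_normal(triangles):
    """Area-weighted average normal of the triangles incident at a vertex.

    triangles: list of ((x, y, z), (x, y, z), (x, y, z)) vertex triples.
    Returns the unit normal of the local average plane.
    """
    nx = ny = nz = 0.0
    for a, b, c in triangles:
        # Edge vectors from vertex a
        ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
        vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
        # Cross product u x v: magnitude = 2 * triangle area
        nx += uy * vz - uz * vy
        ny += uz * vx - ux * vz
        nz += ux * vy - uy * vx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)
```

The local boundary loop points are then projected onto the plane through the candidate vertex with this normal, after which the planar algorithm applies unchanged.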
With a given surface tessellation, for each vertex v in the candidate decimation list we do the following: The following images were supplied by Renze and Oliver in their paper to show the testing which the surface decimation algorithm was subjected to. In each case, the image on the left is the original image, and the image on the right is the one produced when the original image was decimated. The distance-to-plane criterion is used to determine the candidate decimation vertices. Boundary vertices and degenerate topology are explicitly retained. The shaded figures were rendered using flat triangle shading. The first image is a human pelvis, which was constructed from 3D anatomical data. The original image contains 34939 vertices, which was decimated by approximately 85 percent in the resulting image. Correspondingly, the approximately 70000 triangle faces in the initial image were reduced to about 10000 in the output. The second image is an image of a satellite surface. The initial image contained 14112 vertices, of which about 21 percent were deleted in the final image. The number of triangle faces was reduced from approximately 25000 to about 19000. In this case, the CAD program which was used to create the initial image utilized an export format which clustered the majority of the vertices to define distinct features, such as sharp edges. This severely limits the number of vertices which can be reduced with the current decimation criteria while still preserving crisp edge resolution. In the third image, the surface of a piston is illustrated. In this case, the resulting image is displayed as a wireframe in order to show how the vertices were preserved. In the initial image, there were 15314 vertices, of which 76 percent were removed in the final image. Note that in the resulting image the vertices which remain seem to be concentrated in areas of high surface curvature, but gradual features tend to "fade out" as vertices are removed.
This is based on the fact that the surface decimation algorithm works from the current surface configuration, rather than the original. In the fourth and final image, a human head and torso is depicted. The image was reconstructed from 3D anatomical data, as was the first image. The initial image contained 121547 vertices, of which about 66 percent were removed in the final image. Likewise, the number of triangular faces was reduced from about 243000 to about 80000. This is where it starts to get more interesting, and more difficult (and more theoretical). The problem is that, given an unstructured surface definition, generating a valid volume tessellation of the interior domain is nontrivial. A set of points in three dimensions is always tetrahedralizable, except for the cases of a coincident, colinear, or coplanar vertex set. The dual property of the 3D Voronoi diagram can be used to construct a volume tetrahedralization. However, unlike the planar counterpart, a nonconvex volume defined by a constrained face set isn't guaranteed to be tetrahedralizable. Renze and Oliver state the problem more formally: determine if a 3D polyhedron can be decomposed into a set of nonoverlapping tetrahedra whose vertices are vertices of the polyhedron. This problem was shown to be NP-complete by Ruppert and Seidel in their paper "On the Difficulty of Tetrahedralizing Three-Dimensional Nonconvex Polyhedra". An algorithm exists for tetrahedralizing a non-convex polyhedron by using Steiner points (points which are not vertices of the polyhedron). This approach, while it works, goes against the goal of vertex-based volume decimation. In other words, adding points to tetrahedralize a constrained volume which was initially created by the removal of a vertex is counterproductive. Instead, the hole created by the removal of a vertex is filled using a general unconstrained Delaunay tessellation algorithm. Topological sufficiency conditions indicate the inability to tetrahedralize a hole.
If these conditions aren't satisfied, then the candidate decimation vertex can't be removed at the current decimation step. Instead, the vertex is queued at the end of the current decimation list for reconsideration later. The volume decimation algorithm has fewer steps than the surface decimation algorithm. In this algorithm, given a volume tessellation, for each vertex v in the candidate decimation list, we do the following: The local tessellation algorithm which was described earlier can be generalized to higher dimensions fairly easily. The general local tessellation algorithm presented by Renze in his dissertation "Unstructured Surface and Volume Decimation of Tessellated Domains" is similar to the planar tessellation algorithm, except that triangles are replaced with tetrahedra, and edges are replaced with triangular faces. In other words, triangles are replaced by n-simplices, and edges become (n-1)-simplices. The problem illustrated by the star-shaped polygon which caused failure in the planar decimation algorithm persists in this algorithm, with the additional complexity of identifying the number of valid insertion tetrahedra and reconstructing the local boundary face loop. The deletion of an interior vertex from a general surface topology requires that the size of the Valid stack be exactly two less than the original number of incident triangles. This topological sufficiency condition is derived from Euler's formula, and can't easily be extended to volume tetrahedralizations. Instead, Renze and Oliver developed a general local tessellation algorithm convergence mechanism. If the Phase 1 classification produces an empty interior (n-1)-simplex set, then the hole region is convex or the candidate tessellation boundary is interpreted to be topologically inconsistent. "Convexity" can be tested by comparing the number of candidate n-simplices to the number of n-simplices placed on the Valid stack.
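The Euler-derived sufficiency condition for the surface case is simple enough to state in code; this helper is my own phrasing of it, not code from the paper:

```python
def surface_removal_is_sufficient(num_incident_triangles, valid_stack_size):
    """Surface case only: removing an interior vertex with k incident
    triangles must yield exactly k - 2 valid triangles in the re-tessellated
    hole. This condition follows from Euler's formula for the local patch:
    closing a hole bounded by k loop vertices takes k - 2 triangles."""
    return valid_stack_size == num_incident_triangles - 2
```

For example, a vertex with 6 incident triangles must be replaced by exactly 4; any other Valid-stack size signals that the local boundary loop was not preserved.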
The Phase 1 classification usually produces a non-empty interior (n-1)-simplex set for n>=3. The Phase 2 classification must result in an empty interior (n-1)-simplex set when it exits, in order to guarantee the topological sufficiency condition. The deletion of a vertex v can result in more tetrahedra tessellating the original local boundary loop than were formerly incident at v. This means that a reduction in the number of vertices can result in an increase in the number of tetrahedra, although this doesn't usually happen. The volume local tessellation can fail even when the constrained geometry is tetrahedralizable. Consider the case of a unit cube which has an interior vertex v. Deleting v would create a degenerate condition for the unconstrained Delaunay tessellation algorithm, since more than four vertices would lie on the same circumsphere. The choice of the diagonal which determines the two faces on each side of the convex hull is not unique. Therefore, the existence of a nonempty interior (n-1)-simplex (face) set can be equated with inconsistent topology if only a portion of the original boundary loop faces exists in the candidate volume tessellation. This local boundary loop ambiguity can be overcome with some additional logic. The figure below shows incremental stages of a unit cube volume decimation. Shaded regions show tetrahedra which were rejected during Phases 1 and 2 of the local tessellation classification algorithm for the current candidate decimation vertex. The initial volume tessellation consisted of 41 vertices and 189 tetrahedra. In the end, approximately 70 percent of the original vertices were removed. The following table shows the results of the decimation of nine different test geometries. The initial tessellations were generated using either a Delaunay or a Steiner tessellation technique.
For example, the heat sink mesh initially included 7938 vertices and 32939 tetrahedra. The curved channel started with 23255 interior vertices, 117126 tetrahedra, and 7048 boundary vertices. In all cases nearly 90 percent of the original interior vertices were successfully removed. Note that the interior vertices in the Delaunay meshes are ultimately decimated by more than 99 percent. This is significantly better than the Steiner tessellations in most cases, which may suggest that the Delaunay-based local tessellation algorithm is more efficient for decimating initial Delaunay tetrahedralizations than for initial Steiner volume tessellations. Now that we can decimate planar geometries, 3D surfaces, and volumes, we need a way to measure how well the decimation algorithms discussed above perform. The decimation performance statistics given in Table 2 were produced by Renze and Oliver on a Silicon Graphics 150MHz R4400 machine with 128 MB of RAM. The statistics reflect the average time required to identify a suitable interior vertex for decimation, produce a candidate local tessellation, classify the valid n-simplices that preserve the local boundary loop topology and geometry, and finally update the connectivity for the current state. The decimation velocity is defined as the number of interior vertices removed from the tessellation per CPU second. The average decimation velocity remains relatively constant until the boundary limit is approached. The number of incident triangles per candidate decimation vertex ranged from 3 to 720 for the planar and surface geometries tested. An upper bound of 10 to 15 triangles per local boundary loop was observed. The unstructured decimation algorithm described earlier can be used to greatly reduce the complexity of 2D and 3D triangulated meshes and 3D tetrahedral volumes. As shown by the development and test cases, it's robust, local, and n-dimensional.
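The decimation-velocity metric is just a rate; as a trivial illustration with made-up numbers (not the figures from Table 2):

```python
def decimation_velocity(vertices_removed, cpu_seconds):
    # Interior vertices removed from the tessellation per CPU second.
    return vertices_removed / cpu_seconds

# Hypothetical run: 20000 interior vertices removed in 80 CPU seconds.
print(decimation_velocity(20000, 80.0))  # → 250.0
```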
The authors conclude by making the following points: while there are some problems with the algorithm, most of the time they will not be encountered in practice. In general, decimation allows us to reduce the complexity of an image, which in turn allows us to render that image with much greater speed. Copyright © 1997 by John Pawasauskas
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948529738.38/warc/CC-MAIN-20171213162804-20171213182804-00235.warc.gz
CC-MAIN-2017-51
19,595
53
https://forums.theregister.com/forum/all/2020/10/31/ai_covid_cough/#c_4137482
code
Claims like this need robust justification. A sample of five thousand (effectively divided by two, of course) seems rather small, and I'd really like to know the false positive and false negative rates. I'd also like to know the number of trials per subject and their distribution. Sadly the journal is paywalled so I can't find out. However, quite apart from the honesty of the subjects, I can think of quite a few distorting factors that could contribute to uncertainty.
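For what it's worth, the false positive and false negative rates being asked for fall straight out of a confusion matrix; a quick sketch with invented counts (the real study's numbers are behind the paywall):

```python
def rates(tp, fp, fn, tn):
    # False positive rate: healthy subjects flagged as positive.
    fpr = fp / (fp + tn)
    # False negative rate: infected subjects the test misses.
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical counts for a 5000-subject study split roughly in two.
fpr, fnr = rates(tp=2300, fp=150, fn=200, tn=2350)
print(round(fpr, 3), round(fnr, 3))  # → 0.06 0.08
```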
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358966.62/warc/CC-MAIN-20211130080511-20211130110511-00384.warc.gz
CC-MAIN-2021-49
477
1
https://www.programmableweb.com/api/altairis-palehorse-rpc-api/developers
code
Altairis, also known as Altair Communications, is a Czech IT company. PaleHorse is their mailing list management system. It allows users to retrieve information about a mailing list, add or remove subscribers, determine whether someone is a subscriber, send a message to a mailing list, and so on. These functions are accessible programmatically using SOAP calls issued in XML format. Although PaleHorse and its documentation are provided in English, the Altairis website itself is offered exclusively in Czech.
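The specific operation names aren't documented here, so the sketch below only shows the general shape of such a call: building a SOAP 1.1 envelope for a hypothetical IsSubscriber operation. The operation name, namespace, and parameters are invented for illustration; they are not from the PaleHorse documentation:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace and operation -- not from PaleHorse docs.
SVC_NS = "http://example.com/palehorse"

def soap_envelope(operation, params):
    """Build a minimal SOAP 1.1 envelope for one operation."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{SVC_NS}}}{name}").text = value
    return ET.tostring(env, encoding="unicode")

envelope = soap_envelope("IsSubscriber",
                         {"listName": "news", "email": "[email protected]"})
```

The resulting XML string would be POSTed to the service endpoint with a SOAPAction header; a client library such as zeep would normally generate all of this from the service's WSDL.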
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711077.50/warc/CC-MAIN-20221206092907-20221206122907-00603.warc.gz
CC-MAIN-2022-49
511
2
https://m.aptech-education.com/courses-accp-pro-e-commerce.aspx
code
ACCP Pro in e-commerce is a career program designed to equip students with the professional skills required to build a successful career in the booming e-commerce industry. The course trains you in industry-relevant skills such as web productivity tools, e-commerce implementation, logic building & elementary programming, and building next-generation websites. Join our ACCP Pro - E-commerce program to develop stunning e-commerce websites & build a high-paying career. Take the first step to a successful career: contact us for free career counselling.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315750.62/warc/CC-MAIN-20190821022901-20190821044901-00375.warc.gz
CC-MAIN-2019-35
538
5
https://vezwork.itch.io/its-a-date-then
code
It's a Date Then HTML5 (Chrome only :/) This is my first Ludum Dare. I hope you like my game :). The incompatible genres I tried to capture were: multiplayer, dating simulator, (minigames). Challenge your friend to work together towards a stable relationship, as a crab and a moai head. Play 2 multiplayer minigames to test your friendship. Leave a comment Log in with itch.io to leave a comment.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00382.warc.gz
CC-MAIN-2023-06
396
6
https://pugchallenge.org/speaker/peter-judge/
code
Software Architect, Consultingwerk Peter Judge is a software architect at Consultingwerk and has been working with Progress since 1996. Over the years, he has gone from writing reports, to building customer applications, to writing frameworks, to designing and building libraries, and he generally makes stuff work together. Peter's background is in application and application-toolkit design and development; more recently he has worked on server and integration technologies, but his interests span the whole SDLC from design through to deployment, regardless of language and technology. Peter is a frequent poster to various forums, a regular speaker at conferences and user groups, and believes that we only grow individually if we all grow together.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00017.warc.gz
CC-MAIN-2023-40
755
3
https://www.ironcastle.net/json-all-the-logs-mon-aug-8th/
code
JSON All the Logs! (Mon, Aug 8th) My recent obsession has been creating all of my logs in JSON format. The reasons for that are pretty simple: I like to log with Elasticsearch, so creating JSON-formatted logs makes working with Elasticsearch easier. Command-line tools like 'jq' make parsing JSON logs on the command line simpler than "good old" standard Syslog format and a string of 'cut', 'sed', and 'awk' commands.
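A minimal version of this in Python's standard logging module looks like the following; it's a generic sketch (the field names are my choice), not the diary author's actual setup:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("user login ok")  # emits e.g. {"ts": "...", "level": "INFO", ...}
```

Each line is then trivially filterable with `jq`, e.g. selecting only warnings by level, with no `cut`/`sed`/`awk` gymnastics.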
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00712.warc.gz
CC-MAIN-2023-14
467
3
https://www.alfresco.com/abn/adf/docs/process-services-cloud/start-process-cloud.service/
code
Gets process definitions and starts processes. You can use the startProcess method in much the same way as the startProcess method in the Process service (see the Process service page for an example). However, the cloud version combines the process details and variables conveniently into the
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00005.warc.gz
CC-MAIN-2019-09
292
7
http://shelldrakewargames.blogspot.com/2011/10/dungeon-bash.html
code
For a number of years I have collected the odd fantasy figure with the intention of playing a fantasy game. Recently I have been taking a hard look at the Hirst Arts bits and pieces with the intention of creating a set of tiles for dungeon exploration games. As such, I figured I needed to design a dungeon so I would know how many pre-made Hirst Arts pieces I would need. The good thing about the Hirst Arts web site is that they provide many great instructions and ideas for making the type of dungeon that I want to make: Hirst Arts Dungeon Being a little bit lazy, I thought I would see what I could find online to help me with my dungeon designs. Fortunately for me it appears that role players are just as lazy, and there were a few free online dungeon generators for me to choose from. Of all the online dungeon generators I looked at, these three fit the bill best for my wants and needs: The first one I looked at seemed to be the best. It allows you to construct different sized dungeons, add stairs and secret doors, and remove any dead ends it may generate, which prevents silly and pointless parts of a dungeon. The generator even numbers and fills the rooms with beasties for you, but I assume that is for a particular set of rules and I will just ignore the text part of the dungeon. The program allows you to save the dungeon as a PDF, and save an image for a player map without the secret bits: Random dungeon generator 1 |Player's Map for random dungeon| The next one is fairly good and is useful for generating a lot of rooms. Unlike the first one, the layout is not always logical: dungeon generator 2 This one is very much like the first one, although I can't see a function that lets you print it out. Having said that, it does give you a link so you can create it again if need be: Dungeon generator 3 One function I do like is that you can set the biggest room size. It doesn't add stairs though.
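For anyone who'd rather roll their own than use the online tools above, a toy room-and-corridor generator is only a few lines of Python. This is my own sketch, not the code behind any of the generators mentioned:

```python
import random

def make_dungeon(w, h, rooms, max_room=6, seed=0):
    """Carve random rooms into a wall grid ('#'), then join
    consecutive room centres with L-shaped corridors ('.')."""
    rng = random.Random(seed)
    grid = [["#"] * w for _ in range(h)]
    centres = []
    for _ in range(rooms):
        rw, rh = rng.randint(2, max_room), rng.randint(2, max_room)
        x, y = rng.randint(1, w - rw - 1), rng.randint(1, h - rh - 1)
        for j in range(y, y + rh):
            for i in range(x, x + rw):
                grid[j][i] = "."
        centres.append((x + rw // 2, y + rh // 2))
    for (x1, y1), (x2, y2) in zip(centres, centres[1:]):
        for i in range(min(x1, x2), max(x1, x2) + 1):
            grid[y1][i] = "."          # horizontal corridor leg
        for j in range(min(y1, y2), max(y1, y2) + 1):
            grid[j][x2] = "."          # vertical corridor leg
    return ["".join(row) for row in grid]

for row in make_dungeon(40, 12, rooms=4):
    print(row)
```

Connecting each room to the next guarantees no unreachable rooms, which is the same "no pointless parts" property praised in the first generator; removing dead ends never comes up because corridors only exist between rooms.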
Having created a dungeon, I will print it out and start planning the game tiles I need for my game. For rules I might use the "Song of Blade and Heroes" or even keep it very simple and use the old "Fighting Fantasy" choose your own adventure system, or a little RPG game I have called "Dragon Warriors" that has since been reprinted by Mongoose Publishing and this year to be brought out by a company called "Serpent King Games". I have the original rules books, so I will use those. There is a great possibility that I will run games of this as a "Play By Blog" for followers of my blog very much like the zombie game being played on my zombie blog.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589573.27/warc/CC-MAIN-20180717051134-20180717071134-00450.warc.gz
CC-MAIN-2018-30
2,628
14
http://www.phd-positions.dk/?p=189784
code
Applications are invited for a funded 3-year position as PhD candidate (SKO 1017) in the Computational Biology and Gene Regulation Group at the Centre for Molecular Medicine Norway (NCMM), University of Oslo, a Nordic EMBL partner for Molecular Medicine. The research group is led by Anthony Mathelier. The group was established in 2016 and focuses on gene expression regulation and the mechanisms by which it can be disrupted in human diseases such as cancers. In a nutshell, the group develops and applies computational approaches to analyse in-house (through collaborations with groups at the Institute for Cancer Research, where the group is also affiliated) and public multi-omics data to study gene expression dysregulation. Amongst others, our current projects aim at developing new computational methods and tools for (1) improving the prediction of transcription factor (TF) binding sites; (2) prioritizing somatic mutations dysregulating the gene regulatory program in cancer cells; (3) understanding the interplay between TF binding and DNA methylation in cancers; (4) characterizing the landscape of active promoters and enhancers in breast cancer; and (5) assessing the transcriptional impact of the transition from diploid to aneuploid cells in breast cancer. See http://mathelierlab.com for further information. More about the position We seek an individual highly motivated by the development of computational models and tools dedicated to the analysis of biological data. We are looking for applicants excited about combining life sciences and computation to analyse gene expression regulation. The ideal candidate will be collaborative, independent, with strong enthusiasm for research, and should have experience in statistical/machine-learning modelling and computer programming (mainly Python, R, and bash) dedicated to the analysis of large-scale genomics data. Being familiar with gene expression regulation in general, DNA methylation, and TF binding is an advantage.
The selected candidate will develop computational models to study the binding of TFs to DNA and how mutations in cis-regulatory regions can trigger carcinogenesis. The position is open to applicants with a background in computational biology/bioinformatics, computer science, or related fields. We offer a stimulating environment with excellent working and social benefits. The main purpose of the fellowship is research training leading to the successful completion of a PhD degree.
- Master's degree in computational biology, bioinformatics, biostatistics, or a related field
- Proficiency in programming (Python, R, bash)
- Ability to collaborate with researchers from different fields and at different career stages
- Willingness to be part of a team to share knowledge and skills
- Experience with analysis of genomics data sets
- High drive for science
- Proficiency in English
- Knowledge of eukaryotic gene expression regulation is an advantage
- Knowledge of molecular biology is an advantage
- salary NOK 449 400 – 505 800 per annum, depending on qualifications, in a position as PhD Research Fellow (position code 1017)
- a professionally stimulating working environment
- attractive welfare benefits and a generous pension agreement, in addition to Oslo's family-friendly environment with its rich opportunities for culture and outdoor activities
How to apply The application must include
- cover letter (outlining motivations, career goals, past achievements, and research interests)
- CV (summarizing education, positions and academic work)
- copies of educational certificates (academic transcripts only)
- list of reference persons: 2-3 references (name, relation to candidate, e-mail and phone number)
The application with attachments (PDF format) must be delivered in our electronic recruiting system; please follow the link "apply for this job". Foreign applicants are advised to attach an explanation of their University's grading system.
Please note that all documents should be in English. Please see the guidelines and regulations for appointments to Research Fellowships at the University of Oslo. According to the Norwegian Freedom of Information Act (Offentleglova), information about the applicant may be included in the public applicant list, even in cases where the applicant has requested non-disclosure. The appointment may be shortened/given a more limited scope within the framework of the applicable guidelines on account of any previous employment in academic positions. The University of Oslo has an agreement for all employees, aiming to secure rights to research results etc. Inquiries about the position can be directed to group leader Anthony Mathelier: [email protected] Inquiries about application/how to apply can be directed to HR adviser Nina Modahl: [email protected] About the University of Oslo The University of Oslo is Norway's oldest and highest ranked educational and research institution, with 28 000 students and 7000 employees. With its broad range of academic disciplines and internationally recognised research communities, UiO is an important contributor to society. The Centre for Molecular Medicine Norway (NCMM) was established in 2008 and is the Norwegian node in the Nordic EMBL Partnership for Molecular Medicine. NCMM is a joint venture between the University of Oslo, Health Region South-East and the Research Council of Norway. In 2017 NCMM merged with the Biotechnology Centre of Oslo and now has 11 research groups altogether. The overall objective of NCMM is to conduct cutting-edge research in molecular medicine and biotechnology as well as to facilitate translation of discoveries in basic medical research into clinical practice. Apply for this job Read the job description at the university homepage Post expires on Sunday March 17th, 2019
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578633464.75/warc/CC-MAIN-20190424054536-20190424080536-00386.warc.gz
CC-MAIN-2019-18
5,823
37
https://forums.theshow.com/topic/32208/first-inning-programs-sds
code
Because of all the server issues, I propose that SDS give EVERYONE the max rewards of the 1st inning program. Just call it a WASH! Bro that's a lot of stuff for server outages; at least 30k stubs and at least 5 player packs seems more reasonable. But I get your point. I was just hoping to enjoy The Show for the first time ever and now I have to deal with this... This stuff is ridiculous. Like I'm not a professional gamer. I don't have all the time in the world to play, and it seems like every time I go to play, there is an issue. Tonight it can't connect to their server. Usually it's that their servers can't handle it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00365.warc.gz
CC-MAIN-2022-21
619
4
http://blog.ronhebron.com/2015/02/red-light-and-school-zone-cameras-in.html
code
Watch out! Seattle needs money and they are happy if you provide it. Of course, they try to remember to talk about safety when putting money-generating cameras on traffic lights and school zones. But Seattle needs money. Seattle Times has a map of the locations. Though the map doesn’t load for some reason. But when you click on an icon it tells you the location - and how many thousands of tickets it has generated. Seattle Times. They say the data is as of June, 2013.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00175-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
473
3
http://alantechreview.blogspot.com/2013/01/built-in-pdf-viewer-in-chrome-much.html
code
Adobe Acrobat has a new security hole found every day (OK, slight exaggeration), and is slow and bloated. The appeal of a lightweight in-browser PDF viewer is undeniable. Now both Chrome and Firefox offer this option. But the Firefox viewer produces blurry text, and I do not recommend it for anything but quick on-screen previewing. The Firefox PDF viewer does not support "sub-pixel" font rendering, which means that any text smaller than about 16 points looks blurry and is hard to read. If your only goal is to print the PDF after checking that you downloaded the right one, it's certainly good enough, but if you want to save a tree, you'll soon get tired of reading such blurry text, unless you like to read each page zoomed in so that only 1/2 of it fits on-screen at a time. I'd provide a screen shot, but "sub-pixel" rendering means splitting each pixel into its red/green/blue components and turning more or fewer of them on depending on the width and location of the stroke that passes through that pixel. So it only looks good if you know what order the colors are arranged in on your LCD (RGB, BGR, etc.), and if you guess wrong it actually looks much worse. So take it from me. Or, if you want to see for yourself, here are the best step-by-step instructions I've found: After following those steps I still needed to go into the browser preferences and switch the content handler for PDFs from external to "Firefox". Perhaps that is all you really need to do, I don't know. Chrome supports sub-pixel rendering, and I find I can read an 8x11 page full-screen on my 24-inch LCD with no problem. The same is true with Acrobat, of course, but Adobe's product is just too unsafe to use. The only time I've ever gotten anything like a virus was from a hole in Acrobat (that has happened twice to me, in over 20 years of computer use, meanwhile obsessive virus scanning has NEVER turned up a single virus on any file I've downloaded from the Internet).
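To make the sub-pixel idea concrete, here's a toy sketch (mine, not how any browser actually implements it): treat each pixel's R/G/B stripes as three horizontal coverage samples and pack them back into per-channel intensities. It ignores gamma correction and the color-fringe filtering real renderers apply:

```python
def subpixel_row(coverage, stripe_order="RGB"):
    """Pack a row of 3x-resolution coverage samples (0.0-1.0, one per
    subpixel stripe) into per-pixel (R, G, B) intensities.

    A stripe fully covered by the stroke is fully lit; guessing
    `stripe_order` wrong scrambles the channels, which is why a wrong
    guess looks worse than no subpixel rendering at all.
    """
    pixels = []
    for i in range(0, len(coverage) - 2, 3):
        stripes = dict(zip(stripe_order, coverage[i:i + 3]))
        pixels.append((stripes["R"], stripes["G"], stripes["B"]))
    return pixels

# A 1/3-pixel-wide vertical stroke lands on a single stripe:
row = subpixel_row([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
# → [(0.0, 1.0, 0.0), (0.0, 0.0, 0.0)]
```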
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658662.31/warc/CC-MAIN-20190117000104-20190117022104-00282.warc.gz
CC-MAIN-2019-04
1,942
6
http://little-bean-sprout.tumblr.com/
code
have I talked about how my two cats love each other so much and they literally do everything together and they’re always piled all over each other like even when they’re not sleeping they’re just hanging out Sometimes I wonder whether I have any real intelligence or if I just have enough random bits of surface knowledge to bullshit my way through most things. Gender stereotyping in the English language I was doing my reading for class and I came across this and was shocked at how accurate it was. I consistently forget these tricks. Now I have a visual. Thanks, Internet. I wish I’d known this in undergrad. Sending this to my coworkers on Monday.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131846.69/warc/CC-MAIN-20140914011211-00086-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
CC-MAIN-2014-41
660
8
https://developer.arm.com/docs/ddi0388/f/system-control/register-descriptions/tlb-lockdown-register
code
The TLB Lockdown Register characteristics are:
Purpose: Controls where hardware translation table walks place the TLB entry. The TLB entry can be placed in either the set-associative region of the TLB or the lockdown region of the TLB, and, if in the lockdown region, the register selects the entry to write. The lockdown region of the TLB contains four entries.
Usage constraints: The TLB Lockdown Register is only accessible in privileged modes, is common to Secure and Non-secure states, and is not accessible if NSACR.TL is 0.
Configurations: Available in all configurations. See the register summary in Table 4.22.
Figure 4.12 shows the TLB Lockdown Register bit assignments. Table 4.23 shows the TLB Lockdown Register bit assignments. P: Preserve bit. 0 is the reset value.
To access the TLB Lockdown Register use:
MRC p15, 0, <Rd>, c10, c0, 0 ; Read TLB Lockdown victim
MCR p15, 0, <Rd>, c10, c0, 0 ; Write TLB Lockdown victim
Writing the TLB Lockdown Register with the preserve bit (P bit) set to 1 means subsequent hardware translation table walks place the TLB entry in the lockdown region, at the entry specified by the victim, in the range 0 to 3. Writing with the P bit set to 0 means subsequent hardware translation table walks place the TLB entry in the set-associative region of the TLB.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371858664.82/warc/CC-MAIN-20200409122719-20200409153219-00462.warc.gz
CC-MAIN-2020-16
1,203
21
http://definedmeaningof.com/meaning-of/pyestyofeu-eechyachu/meaning-of-the-name-pyestyofeu-eechyachu-means
code
PYESTYOFEU EECHYACHU name means: There are no results for PYESTYOFEU EECHYACHU. Check your spelling or try different keywords. Like to add another meaning or definition of PYESTYOFEU EECHYACHU? Names, sentences and phrases similar to PYESTYOFEU EECHYACHU: MURDANN means: Feminine form of Scottish Murdoch; MURDANN means "sea warrior." Najib means: Intelligent. Daylan means: Hollow; Valley; Rhyming Variant of Waylon; A Historical Blacksmith with Supernatural Powers. Chivaperuman means: One of the World Lord Shiva. Aademma means: Youth. Ghansa means: Soft; Grass. Aftan means: More Attractive; Charming. Brewstere means: One who Brews Ale; Brewer. Kaelah means: Keeper of the Keys; Pure. Okke means: Wealth; Fortune. Tags: Horoscope & Zodiac Meaning of PYESTYOFEU EECHYACHU. Did you find the numerology meaning of the name PYESTYOFEU EECHYACHU? Please add a meaning of PYESTYOFEU EECHYACHU if you did not find one from a search of PYESTYOFEU EECHYACHU. Copyrights © 2016. All Rights Reserved.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247487624.32/warc/CC-MAIN-20190218175932-20190218201932-00072.warc.gz
CC-MAIN-2019-09
1,485
17
https://www.talkbass.com/threads/what-software-for-p-q-encoding.129329/
code
I'm currently using Nuendo 1.53 for all my recordings. Well, I need to be able to encode my finished recordings so that a standard cd player will recognize individual tracks. The recording is done like Pink Floyd's "The Wall" and there isn't always a gap between songs. Any help would be greatly appreciated. Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00348.warc.gz
CC-MAIN-2018-22
316
1
https://logicgate.workable.com/j/D375C4FBC8
code
What we do
LogicGate creates flexible and beautiful business process software that helps organizations reduce risk and improve compliance. We are disrupting the Governance, Risk, and Compliance and Business Process Management software industries by providing a solution end users can self-manage, saving enterprises huge amounts of time and money. Our customers use LogicGate to visually design end-to-end workflows and create highly configurable process applications that place controls around mission-critical activities. Think of it as an "if this then that" for companies.
Who we're looking for
As a young company in a software space filled with clunky and dated interfaces, we see user experience and interface aesthetics as the foundation of our success. We're looking for a front-end developer who can't stand poor UX and wants to blow the minds of our customers with gorgeous pages and slick design. We want someone who doesn't see "functional" as complete, but will strive to make new features beautiful. Our ideal candidate is an opinionated leader who loves the challenge of designing something new and is bored with the regurgitation of existing constructs. It's essential that team members are able to take ideas and run with them, relying on experience and creativity.
What you'll do
- Design and develop new user-facing features
- Ensure the technical feasibility of UI/UX designs
- Optimize the front-end application for speed, reliability, and scalability
- Investigate and implement new technology to keep the stack current
- Write tests to facilitate an efficient dev cycle and a bug-free application
- Collaborate with a small team to bring new ideas to life
What we use
- Work: Jira, Slack, Gitlab, G Suite
- CI/CD: Gitlab CI, Docker, AWS
- Back end: Spring Boot, Java 8, Kotlin, Neo4j, MySQL
Why you'll like it
You'll help create best practices instead of just following along.
You'll have the freedom to explore new technologies and help make fundamental product decisions as the application and client base grow. Our culture is fast-paced and driven by a passionate team, but allows for the flexibility you'd expect from an early-stage startup. As one of the team's early hires you'll have the option to take equity in an already well-funded Chicago startup. You'll have a strong voice in a team of co-founders and engineers. This is a rare opportunity to play a key role in building a business and have a direct impact on the product, as part of a collaborative and high-performing engineering team. We're nestled in the corner of River North on the second floor of 214 W Ohio Street. Our kitchen is typically well stocked with a variety of interesting La Croix flavors, snacks, and drinks for impromptu happy hours. We offer the following benefits:
- Healthcare (PPO and HMO)
- Transit Benefits
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217901.91/warc/CC-MAIN-20180820234831-20180821014831-00192.warc.gz
CC-MAIN-2018-34
2,855
22
https://www.programcreek.com/java-api-examples/?project_name=erickok%2Fratebeer
code
Evolution and simplification to bring the old RateBeer for Android app up to modern Android code and design standards, while being more reliable and less prone to ratebeer.com server changes and instabilities. The project should compile without any additional work, as all dependencies are managed through Gradle. The Android Jack toolchain is used to support Java 8-style lambdas. Android devices with 4.0.3 (API level 15) and up are supported, without any tablet-specific interface for the moment. The project uses RxJava heavily to manage data streams (loading, caching), async processing, and event notifications. The source data is defined by the RateBeer JSON API only, for which limited documentation exists. Most data shown in the app is temporarily cached in the database, with a fixed time limit to refresh data. Offline ratings of the signed-in user are stored in the database after a manually started sync (managed by a background Service). Copyright 2010-2016 Eric Kok et al. RateBeer for Android is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. RateBeer for Android is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with RateBeer for Android. If not, see <http://www.gnu.org/licenses/>. See LICENSE.txt for the full license. Remember that, practically speaking, you can NOT use this code without publishing the derivative code under the GNU GPL v3 license as well. Thanks for sharing! Some code/libraries/resources are used in the project:
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00251.warc.gz
CC-MAIN-2022-40
1,871
5
http://doofercall.blogspot.com/2012/07/solr-replication-invalid-version-error.html
code
Solr replication "invalid version" error
Friday, July 13, 2012
A quick one for googlers: I had this error in my catalina log when trying to replicate a Solr index: invalid version expected 2 but 10 or the data in not in 'javabin' format. I first saw it when I moved between versions of Solr, because versions 3+ use a different format. Most things I read were about this issue, and it was easy enough to resolve (I just made sure all my Solrs were on the same version - not so easy for everyone but worth the effort). But in one case I still had the error, and this whole javabin thing turned out to be a red herring. The answer (from here) is simply that the configuration of the slave included "admin" in the replication URL, which is not where the replication service itself exists. Doh! There was indeed something at the http://www.example.com/solr/admin/replication/ address, but it's the admin interface to replication rather than the endpoint itself, which was at http://www.example.com/solr/replication/. I'm not sure how the error crept in for me but clearly I'm not the only one! Perhaps I snipped it from some faulty documentation, but all those false positives going on about javabin format kept me busy trying all sorts of other things before I stumbled across the right answer. Anyway, check your solrconfig.xml on the slave to be sure your masterUrl string doesn't include "/admin" in the path.
- Web person at the Imperial War Museum, just completed PhD about digital sustainability in museums (the original motivation for this blog was as my research diary). Posting occasionally, and usually museum tech stuff but prone to stray. I welcome comments if you want to take anything further. These are my opinions and should not be attributed to my employer or anyone else (unless they thought of them too). Twitter: @jottevanger
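For reference, the slave-side replication config being talked about looks roughly like this in solrconfig.xml (the host and poll interval are placeholders; the point is that masterUrl must not contain /admin/):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- Correct: the replication endpoint itself... -->
    <str name="masterUrl">http://www.example.com/solr/replication</str>
    <!-- ...NOT http://www.example.com/solr/admin/replication -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```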
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605485.49/warc/CC-MAIN-20170522171016-20170522191016-00262.warc.gz
CC-MAIN-2017-22
1,840
5
https://ralsina.me/categories/linux-17.html
code
How to write a tiny python app (less than 50 lines) that reacts to events on the DBUS buses. For example, displaying a notification when you press one of your keyboard's special keys. I will be doing a brand-new, never-before-seen introduction to PyQt programming at the "Jornadas de Software Libre y Open Source" in Mar del Plata tomorrow or the next day. More info at http://softwarelibre.mdp.utn.edu.ar/ If you mention this blog and ask nicely, you get a can of cheap national beer tomorrow night (limit: 2 cans ;-) Most people nowadays have more than one computer. Often, you are using one, and would like to do something in another. In this video, I will explain how trivial it is to do that without leaving your seat in a modern Linux using KDE. We will use the following: Avahi, a zeroconf implementation to let you find your computers in your network without worrying about IP addresses, DNS, etc. krfb, the KDE Remote Frame Buffer. This is a program to share your desktop over the network. krdc, the KDE Remote Desktop Client, a VNC/RDP client, which is what you use to see a desktop shared via krfb. I am sure users of other operating systems or desktop environments will say they can do it just as easily. In that case, feel free to do your own videos ;-) Keep in mind that accessing remote desktops over the internet is a whole different beast, and this solution is not meant for that case. As usual, this video was recorded using qt-recordmydesktop. There was minor editing using mencoder. The computer used is the original Asus eee PC 701 4G, so you can see this is not exactly a hardware-intensive operation. I find the eee's small screen is great for this kind of full-screen demo, because it's not big enough to drown the important parts. While randomly clicking today I saw this article with the fun title of "2009: software installation in GNU/Linux is still broken -- and a path to fixing it" by Tony Mobily. 
As I don't have an account at FSM and don't intend to open one today, I will post a response here, where no one will ever notice it. Because that's how self-confident I am. I warn you, this response is full of proof-by-assertion and invalid generalizations, but hey, it's opinion. It starts with an interesting claim: And yet, software installation in GNU/Linux is broken. No, not broken… it’s terribly broken. Why is that, and what can be done to fix it? I intend to show how there is one little problem with that statement: reality. As in "what's the most successful software delivery platform for 'normal' users in 2009, and why that matters here" reality (hint: it's not windows, it's nothing on a Mac). I mean the iphone, and its App Store. And what he is proposing is abandoning something that works a lot like it, and going back to the cave-age "download this from the app's site" model windows uses. He describes the usual installation procedure, be it apt, yum or pacman, almost all distros do the same thing nowadays, then starts describing the problems. Users need to have root access in order to install a piece of software; no per-user installation is allowed. This would have been a problem in the old days of shared computers. Nowadays, you are root, or (in some cases) root's mom. All modern distros use sudo so you don't need root access, you just use your own password (or no password at all). So, this is a non-issue. It’s very tricky to install several versions of the same piece of software. Just think of the poor graphic designer who needs to install several versions of Opera and Firefox; Users are stuck with the piece of software installed system-wide; How do you install iFart 1.0 and 2.0 in an iphone? Can you? Special users (like graphics designers) should be provided with special solutions, like portable firefox. Think there's a need for it? Go ahead, it's maybe one day of work. The software needs to be downloaded from the official repositories. 
Well, it doesn’t need to, but an average user wants to stay well away from unofficial repositories for technical reasons; The iphone software needs to be downloaded from the app store. That has not hurt anyone (other than app developers, I mean). In some cases (especially when the user adds repositories or installs packages directly), the dependency-checking mechanism often fails and users end up with circular dependencies. They are nasty; I have not seen a circular dependency since 2003. But hey, maybe they do exist. In that case, don't do that. A piece of software is bound to a specific distribution, and — what’s worse — to a specific version of that distribution too. It’s not trivial to install Openoffice 3.1 on Ubuntu 8.10. You can argue that you can install the bunch of .deb packages from OpenOffice’s web site. Tell that to your grandmother or your average inexperienced computer user. Some iphone apps will not work without the 3.0 firmware. Solution? Upgrade your firmware. How's that different in Linux? It’s not trivial to “give” a program to a friend. To the end user, giving a program to a friend should be as simple as dragging an icon onto a memory stick; instead, files are scattered all over the system. It's even simpler than that: here, let me give you rst2pdf for ubuntu: "Tony, install rst2pdf, it's in 'universe'". I don't even need to give you a floppy disk or whatever kids use these days. Oh, and "how do you give someone an iphone app?" Then, he proposes solutions, which I think are worse than the status quo: Users should be able to install software even without being root. Users are root, or root's mom. This is not a problem. Users should be able to install different versions of the same software immensely easily. The only way to do this is to make everything static or freeze the system's libraries. Do this for "portable" versions of specific apps if you want. Don't make the system suck, please. 
Users should be able to run either their own version of the software, or the one installed on the system (if any). Fine by me. OTOH, how do you do that on an iphone? You don't. Please, stop diverging from what is working in 2009. It should be possible to download and install software even though it doesn’t come from an official repository. Again, the iphone doesn't, and no one cares. If you care, you are not the normal user you claim to advocate. Software just needs to work — unchanged — even if it’s a bit old and it’s run on a newer distribution. Yes, I want Quick Shot for iphones with 3.0 firmware! Again, almost no one cares, except app developers. You are acting like a Linux user, dude! It should be possible to “give” a piece of software to a friend by simply dragging an icon onto a memory stick. Just tell the guy the name of the app already! All this is true with Apple’s OS X. They got software installation just right — although a few programs, lately, seem to come with an ugly installation process. No one chooses a mac for this. They choose it because it's pretty, or because they are told they are better, or whatever, but anyone actually saying "hey, on a mac I can give you Office by sharing this 800MB dmg!"? Not only does no one do it, it's freaking illegal. Besides, you give the dmg to your friend, and 92% of the time, your friend has windows or Linux and can't use it. So your chance of success is about 8%. Imagine if it only worked if your friend's computer was an Acer? There you would have a 10.5% chance! (see here) No one would say that's good! No, sharing apps like that is not the problem. Please, dear linux distro developers, look around you, and see what works in 2009. Amazingly, what works looks an awful lot like what we have been enjoying since 1998 or whatever, so let's enjoy it and make it better, instead of trying to become like a mac, or like windows, old style platforms in the age of the smartphone and the netbook. 
I can't reply at the site because: It requires login. I can't find a place to get an account. It has freaking google ads popups So I reply here because: Hey, it's my own blog, so I can do whatever I want. Here's the comment by hussam with my (admittedly ranty) response: I've been using ArchLinux as my only distribution/operating system since early 2006. It is really a good distribution but lately there have been a lot of really bad choices which I call bad compromises: 1. Too many ArchLinux users think gnome/kde are bloat and instead install some half developed window manager and some terminal emulator and call it a "minimalist" desktop. Why is that any of your business? And what "compromise" is there? 2. Optional dependencies are the worst idea ever. If a package is linked against libsomething.so then libsomething should be a dependency not an optional dependency. Making libsomething an optional dependency just because "minimalist" users don't want to install dependencies is plain stupid. That's not what optional dependencies are for. For example, consider the example I mentioned, rst2pdf. It can use pythonmagick. It can also not use it. You will lose one small feature that AFAIK only one person ever used. If you need that feature, the manual tells you what to do: install pythonmagick. Maybe there should be a pacman option "install optdepends" which turns optional dependencies into regular ones. That would make you happy and keep others happy too. 3. Bad leadership. Aaron is fantastic guy but I know at least two ArchLinux developers who can do a much better job. That's just stupid and mean. 4. Too many ArchLinux users now like badly automation scripts like yaourt or whatever it is called. Parse error. And then again: yaourt is great. You don't like it? Act as if it doesn't exist and be happy. You seem to have a big problem ignoring people who disagree with you. That's a really, really serious personal flaw. I suggest you grow up. 5. 
Too many noobs who do dumb things like people adding their users to hal, disk and dbus groups. Sure, they should add themselves to optical and storage. So what? It's a simple problem and it has a simple solution. Then again, the adduser scripts probably should do that for regular user accounts. After all, who wants to create a regular user that can't use removable storage? And if said use case exists, that should be doable by removing the user from those groups, and not vice versa! On the other hand, I don't give a damn, because I can fix it trivially. The main reason why I don't think I will switch to another distribution soon is that creating Archlinux packages from scratch is very easy and the initscript system is really fantastic. All in all, ArchLinux is a really strong distribution now and it's constantly growing. I expect you, like most elitist poseurs, will run away when you feel Arch is too popular and accessible to "too many noobs" or some similar nonsense. Which, like the title says, is why you are a big part of what's wrong with Linux users.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473524.88/warc/CC-MAIN-20240221170215-20240221200215-00774.warc.gz
CC-MAIN-2024-10
10,849
81
http://www.linuxdig.com/howto/ldp/Virtual-Web.php
code
David Merrill - Conversion from text to DocBook SGML.

A World Wide Web (WWW) Server is normally a single machine dedicated to processing HTTP requests for a single WWW site. Simply put, one WWW site per machine. Since the computing resources for processing httpd requests are low for most WWW sites, the majority of the computing resources are left unused. A virtual WWW site simply allows more than one WWW site to share a single processor. Instead of having www.domain1.com and www.domain2.com requiring two physical computing devices, www.domain1.com and www.domain2.com can be located on a single computing device and share common resources. Normally, small computing facilities and small businesses do not have the resources to maintain a dedicated web server and a dedicated Internet connection. These costs can easily start off at $10K for setup, and $500-2500 monthly to maintain. Small computing facilities and small businesses are now able to "rent" WWW space from Virtual WWW providers. The customer can then maintain the WWW "pages" using a local telnet and/or FTP connection. WWW providers such as InfoCom Networks http://www.infocom.net/ provide WWW space for as low as $75 per month. A few Virtual Sites might clear up the mystery. So the cost of setting up a WWW site is significantly lower than that of setting up a dedicated server and connection. The Virtual Site has a major advantage over other WWW addressing schemes such as "www.yourprovider.com/~businessname". The Virtual WWW server inherently contains the ability to move to a new location or set up a dedicated WWW server without changing addresses. Changing WWW URLs can result in a major loss of traffic to your site, and lots of business literature updates. With most web sites, www.domain1.com and www.domain2.com both resolve to separate IPs. In order to serve multiple sites from a single host, the virtual host must be able to answer requests for both sites. 
The method used to solve this problem is called IP aliasing. IP aliasing allows a single host to accept requests for multiple IPs. The virtual Web server must have the ability to alias IPs. IP aliasing is just one part of the virtual solution. The Domain Name System (DNS) also must be configured to resolve both www.domain1.com and www.domain2.com. If domain1.com and domain2.com are new domains, then both must be registered with Internic. Currently, Internic is charging $50 a year to maintain your domain. Most virtual WWW sites should also provide virtual mail, or the ability to forward all mail for the virtual domain to another user or users. Virtual FTP, or the ability to FTP using the standard host name "ftp.domain1.com", should also be configured by the WWW provider. Linux versions 1.2.X require the IPalias patch alias-patch-1.2.1-v1 and alias-net-tools.tar. I'm not sure if 1.3.X supports this patch yet. For more information on the IPalias patch see ftp://ftp.mindspring.com/users/rsanders/ipalias/

Using multiple dummy interfaces has been suggested in place of the IPalias solution. While the dummy solution may work, it does not appear to be as clean as the IPalias solution. For more information on using Apache and the dummy solution see Aram Mirzadeh's virtual hosting information at http://www.qosina.com/apache/virtual.html

All that is required to add a new alias using the IPalias method is:

> /sbin/ifconfig eth0 alias www.domainX.com

Also, the IPalias solution is supported on several other platforms. NCSA 1.5, Apache, and Spinner support Virtual hosting.

http://hoohoo.ncsa.uiuc.edu/docs/Overview.html
http://www.apache.org/
http://spinner.infovav.se/

Create a regular Linux account for the virtual customer with home directory and mail. Virtual Host implementations are still changing. A few patches exist to support Virtual Hosts. Check the server's release notes for more details. 
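The single ifconfig alias command is the whole trick at run time; to make the aliases survive a reboot, setups of this era typically repeated them in a boot script such as /etc/rc.d/rc.local. A hedged sketch (the interface name and host names here are illustrative, not from the original HOWTO):

```shell
# Illustrative /etc/rc.d/rc.local additions on a kernel with the IPalias patch.
# Interface and host names are examples only.
/sbin/ifconfig eth0 www.mainsite.com          # primary address for the box
/sbin/ifconfig eth0 alias www.domain1.com     # first virtual WWW site
/sbin/ifconfig eth0 alias www.domain2.com     # second virtual WWW site
```

Each alias makes the host answer for one additional address, which is what lets a single machine stand in for several WWW sites.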
NCSA 1.5 or Apache now include the Virtual patches, and I have been told that Spinner supports virtual hosts. One virtual patch supports the following srm.conf syntax; however, the second NCSA 1.5 method of defining a Virtual host allows for greater flexibility. NCSA and Apache support the following httpd.conf syntax:

Once the IPalias patches have been installed, add the following to your /etc/rc.d/rc.local on your local web server. If you are setting up a new domain or changing a current domain, you must register the domain with Internic. The template can be found at ftp://rs.internic.net/templates/domain-template.txt

Named will need to be configured so that your virtual domain will be visible to the outside world. I don't claim to be an expert on DNS. Suggestions always welcome. You should already have a db.xxx.xxx.xxx for your current site; update it to contain the new virtual domains for reverse lookups. Once you've finished editing config files, you will need to restart the named daemon. Your virtual customers will more than likely want the ability to have mail that is sent to their domain forwarded to another domain. A few sendmail.cf changes will do the trick. After several months of trying different sendmail changes, this is the 1st method that I found that works and requires only one sendmail.cf change for each new virtual site. Currently, I have not been able to get Virtual FTP to work. A few patches exist, and I'm sure a working patch exists. We just create a working directory /home/ftp/business/domain1, but a true Virtual FTP would be nice. If anyone would like to contribute a solution, I would be more than happy to add it here. Arnt Gulbrandsen has rewritten ftpd and has included support for independent FTP services: the Troll Tech FTP Daemon.
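The httpd.conf example the text refers to did not survive the conversion of this document. As a hedged reconstruction (not the author's original snippet), NCSA 1.5 and early Apache accepted per-host sections along these lines; the host name and paths are placeholders:

```apache
# Hypothetical reconstruction; server name and paths are placeholders.
<VirtualHost www.domain1.com>
  ServerName www.domain1.com
  DocumentRoot /home/domain1/public_html
  ErrorLog /var/log/httpd/domain1-error_log
  TransferLog /var/log/httpd/domain1-access_log
</VirtualHost>
```

One such section per virtual customer, combined with the IP alias and DNS entries described above, is what makes the single server answer as both www.domain1.com and www.domain2.com.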
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00007-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
5,628
26
https://www.iapptweak.com/2019.01.unc0ver-v2-1-3-released-with-bug-fixes-and-improvements.html
code
Less than a week after launching unc0ver v2.1.1 to the general public, hacker and project lead developer Pwn20wnd has updated the iOS 11-centric jailbreak tool once again with bug fixes and improvements. Pwn20wnd revealed the updated jailbreak tool Saturday morning via Twitter, adding that unc0ver v2.1.3 is an updated public release for all users rather than a beta intended for public testing:

unc0ver v2.1.3 is now out. https://t.co/xwxRUDYbqj — Pwn20wnd (@Pwn20wnd) January 19, 2019

Citing Pwn20wnd’s official GitHub page, unc0ver v2.1.3 encompasses the following changes:

01/19/2019 – v2.1.3 was released for production with the following changes:
– Fix a bug in patch finder that affected the shenanigans finder on specific iOS versions
– Switch to a better versioning system
– Make downgrading from v2.2.0 possible (Unreleased as of now)

An asterisk in the changelog adds that the version number 2.1.2 has been skipped over entirely because it was already being used for an internal testing version. If you’re using unc0ver v2.1.1 at the time of this writing, then you’re advised to update to unc0ver v2.1.3 at your earliest convenience to ensure you have all the latest bug fixes and improvements. Just as with previous versions of the unc0ver jailbreak tool, version 2.1.3 is available for download from Pwn20wnd’s official GitHub repository. Unc0ver is a semi-tethered jailbreak just like Electra, which means you must re-run the tool after every reboot. That aside, it bundles an iOS 11-optimized build of Cydia that sports the official seal of approval from Saurik himself. Have you downloaded unc0ver v2.1.3 yet? Let us know in the comments below.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00601.warc.gz
CC-MAIN-2024-18
1,679
13
https://www.fi.freelancer.com/projects/mobile-phone/develop-app-available-both-ios/?ngsw-bypass=&w=f
code
We are requesting a quote for development services of a mobile app available on both iOS and Android. Version 1 was released in Swift language on iOS in Jan 2021. It has been redesigned in detail, and we need a developer to code the updated design. This request is purely for development resource; see below for the roles already filled (i.e. Testers).

Version 1 of our app exists in the App Store only (it is now hidden from users)
Front end: Swift Native
Code repo: Gitlab
Version 2 to be released on App Store.
Version 2 to be released on Play Store.
Detailed wireframe design: Figma (Complete)
Detailed functional stories: Trello (Complete)

Please provide your email address when responding and we will grant access.

Mobile App Developer
App Store deployment - by the 15th November 2021
Play Store deployment - by the 30th November 2021

Other developers have estimated 520 hours following detailed review.

A. Please provide your hourly rate for this project.
B. Provide any references to applications which utilise APIs to search 3rd party content providers.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00396.warc.gz
CC-MAIN-2021-39
1,061
18