url: stringlengths 13–4.35k
tag: stringclasses (1 value)
text: stringlengths 109–628k
file_path: stringlengths 109–155
dump: stringclasses (96 values)
file_size_in_byte: int64 112–630k
line_count: int64 1–3.76k
http://smichea.free.fr/GAP2/links.html
code
There will be a $100 registration fee. Local expenses in China will be covered (including lodging, meals, and transfers between Beijing and Kaifeng). However, due to the limits of our funding, we are unable to provide any travel expenses. The deadline for registration is November 15, 2003. For those marked C* and those who got a late invitation, please confirm/reply ASAP. As you may know, you need a visa to enter China. Normally a visa is valid for three months from the date of issue. We will send you a formal invitation letter, which you need to bring with you to the closest Chinese consulate when applying for your visa. To help us, could you please send your affiliation and mailing address as soon as possible?
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488273983.63/warc/CC-MAIN-20210621120456-20210621150456-00259.warc.gz
CC-MAIN-2021-25
713
5
https://share.cocalc.com/share/4310dd3fa406f86eb6972757b297fcc17e0f2986/spectral-radius-of-graph.ipynb?viewer=share
code
Gordon reports an issue; David opens a ticket; Vincent fixes the issue; David reviews the fix. The fix is merged in Sage 8.4.beta4. Since CoCalc has the latest development release of SageMath, let's test this! We open a Jupyter notebook in CoCalc and select the "SageMath (development)" kernel from the Kernel menu. g = graphs.CompleteBipartiteGraph(1, 3)
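For reference, here is a minimal sketch of what such a test could look like in a Sage kernel. The spectral_radius() call is an assumption about the routine being exercised (in recent Sage versions it returns an enclosing (lower, upper) interval); the exact-eigenvalue computation is an independent cross-check, since the spectral radius of the star K_{1,3} is sqrt(3):

```python
# Minimal sketch (Sage kernel assumed, e.g. "SageMath (development)" on CoCalc).
# K_{1,3} is the star on 4 vertices; its adjacency spectrum is
# {sqrt(3), 0, 0, -sqrt(3)}, so its spectral radius is sqrt(3) ~ 1.7320508.
g = graphs.CompleteBipartiteGraph(1, 3)

# Cross-check via exact eigenvalues of the adjacency matrix.
rho = max(g.adjacency_matrix().eigenvalues())
print(rho, float(rho))

# The routine under test (assumed name); it should return a tight
# (lower, upper) interval around sqrt(3) instead of hanging or erring.
print(g.spectral_radius())
```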
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00516.warc.gz
CC-MAIN-2020-45
371
6
https://www.tutorsglobe.com/question/define-leadership-accurately-reflects-style-of-leadership-524432.aspx
code
1. What kind of leadership best describes you?
2. After studying leadership this semester, would the way you define leadership change? If so, in what way? If not, why not?
3. Do you agree that how you define leadership accurately reflects your style of leadership? If yes, why? If not, why not?
4. Choose one style of leadership, or one theory of leadership, and describe it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00103.warc.gz
CC-MAIN-2023-14
380
4
https://rm2030.cme-congresses.com/poster-guidelines/
code
Requirements for Poster Presentations
The dimensions of the poster board are 90 cm wide and 120 cm high. Allocate the top of the poster to the title and the authors’ names and affiliations as stated in the submitted abstract. The text and illustrations should be bold and large enough to read from a distance of two meters (six feet). Tape and technical equipment for hanging the posters will be available, as well as assistance to help you mount your poster in the poster presentation area. The Organizing Committee will not be responsible for posters that are not removed on time.
Sample of Poster
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00876.warc.gz
CC-MAIN-2023-50
628
7
https://community.spiceworks.com/blogs/community-release-notes/2907-release-notes-january-5th-2018
code
Happy New Year, SpiceHeads! We're kicking things off with an early site update to grab a couple of updates we finished in December but didn't have a chance to push out.
- Email: Fixed a bug where email subscriptions were only sent to the first group you tagged on a topic.
- Navigation: "Infinite scroll" areas below site content have been removed. With today's update, they should be gone from Topics, How-tos, and Scripts. You can still click "Load More" to see more articles if you wish, but now the footer of the site should be accessible to everyone!
- Subscribed: Fixed a bug where blog posts in your 'subscribed' list (on your profile page) wouldn't take you to the latest unread reply.
- Topics: Fixed a bug where the prompt to "help out the OP and earn points" was showing up for Discussions instead of just for Questions, as intended.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00724.warc.gz
CC-MAIN-2020-40
850
5
https://maui-north.ca/product/km-hawaii-bamboo-cruiser/
code
KM’s Bamboo Cruiser series of boards was created to provide a cross-over option for the stand-up paddler who enjoys both flat water and the occasional swell. Available in KM’s beautiful and dynamic color schemes, the bamboo layer creates stunning aesthetics and makes for some true eye candy in and out of the water. In the water, the Cruiser’s double-concave bottom maximizes surfing performance and gives it a true longboard feel. To ensure you never forget that perfect wave or sunrise paddle, the Cruiser features a flush FCS mount plug so you can mount your GoPro right on the nose and create your own memories.

| Length | Width | Thickness | Volume (L) | Fins |
|--------|-------|-----------|------------|------|
| 10’6″ | 30″ | 4 1/4″ | 170 | 2 + 1 |
| 11’0″ | 31″ | 4 1/2″ | 195 | 2 + 1 |
| 11’6″ | 32″ | 4 3/4″ | 220 | 2 + 1 |
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00065.warc.gz
CC-MAIN-2023-50
749
4
https://www.vortak.net/rtx-3060-gamers-vs-miners_d5780cb46.html
code
My review of the Nvidia RTX 3060. The cheapest RTX 30 series GPU from Nvidia. Linus Video on the Hashrate nerf - https://youtu.be/XfIibTBaoMM If you want to support the channel, consider a Dave2D membership by clicking the “Join” button above! Purchases made from store links may give me some money. (It doesn’t cost you extra, so please buy everything)
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00612.warc.gz
CC-MAIN-2021-17
359
5
https://www.fiskerbuzz.com/forums/325781-post5.html
code
I seem to remember working with Andrew Shewn (spelling?) and he had some software that could reprogram either the VCM or the keys to work together. Maybe things have changed from the 2013 days. Hopefully this info helps.
Originally Posted by PowerSource
If it doesn't come with keys then the VCM is useless
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313803.9/warc/CC-MAIN-20190818104019-20190818130019-00548.warc.gz
CC-MAIN-2019-35
300
3
https://fabulagifts.com/products/monte-pythons-flying-sticky-notes
code
Monte Pythons Flying Sticky Notes
Price: $8.99
And now for something completely sticky! This sticky note book features classic Python iconography. You’ll need them to remember what time you’re scheduled to work at the Argument Clinic and to keep your place while grading Philosophy Department term papers at the University of Woolloomooloo.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00726.warc.gz
CC-MAIN-2023-23
424
6
https://subscription.packtpub.com/book/programming/9781784398026/2/ch02lvl1sec18/scene-builder
code
For complex and sophisticated UI requirements, wouldn't it be easier for designers to use a tool to design their UI with a WYSIWYG interface, without writing any code, and then load the result (an FXML file) into their JavaFX application logic? This is what JavaFX Scene Builder is for: it is a visual layout tool that allows you to easily lay out your UI controls, so that you can quickly prototype your application with effects and animations. Scene Builder version 2.0 and upwards is compatible with JavaFX 8. At any time during the creation of your project, you can preview your work to check its real appearance before deploying it. It is open source and integrates with most IDEs, though most tightly with the NetBeans IDE. It is also a cross-platform, self-contained application that runs on most platforms. In addition to supporting CSS, it allows you to easily apply custom theming to your prototype.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00104.warc.gz
CC-MAIN-2023-50
930
6
https://ardc.edu.au/project/catching-oz-waves/
code
Wave dynamics are critical to ocean industries, coastal development and leisure activities. This project will develop a national data asset of Australian in-situ wave observations and facilitate improved data delivery to national and international stakeholders. The project will achieve these goals by increasing the amount of publicly available data for real-time and delayed-mode data streams; reviewing, streamlining and documenting the governance and end-to-end data workflow, allowing submission of data by an expanding range of providers; and developing best practices and procedures for quality control, metadata and data formats for legacy and emerging wave measurement systems.
Who is this project for?
- Government agencies
- Research organisations
- Commercial marine organisations
What does this project enable?
This project will incorporate wave data across multiple organisations and sectors with national coverage in terms of spatial extent (with data provided from every State and the Northern Territory). The implementation of governance and key processes, metadata standards and other guidance will enable new partnerships, extending data ingest to further networks or additional buoys. The development of this data asset will provide a foundation for additional data contributions from the deployment of wave buoys in new locations.
- Integrated Marine Observing System’s Australian Ocean Data Network (AODN) Portal
- Full project title: Development of a National Infrastructure for in-situ wave observations
- Project id: https://doi.org/10.47486/DP748
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00156.warc.gz
CC-MAIN-2022-05
1,574
11
https://community.intel.com/t5/Developing-Games-on-Intel/Wavy-artifacts-on-H-264-video/td-p/1032790
code
We have an issue with H.264 video decoding using the Intel QuickSync decoder. I don't want to say that the problem is in the decoder (we don't know its location), but maybe you have already seen this kind of artifact and can suggest what the source of the problem could be. Incorrect usage of, or wrong values in, which structures could lead to this kind of artifact? Our software uses the following chain of libraries for the decoding process:
- the ffmpeg library on top;
- the DXVA interface used inside ffmpeg;
- the Intel decoder under DXVA.
There are four screenshots in the attachments - three with artifacts and one (the same frame as the first artifact screenshot) without them. The waves in the decoded image appear/change at key frames and stay unchanged for the rest of the GOP. The appearance of the artifacts depends neither on the driver version nor on the graphics kernel version. It appears significantly more often in debug builds; we know of only one case where it appeared in a release build. That's all the information we have for now. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510676.40/warc/CC-MAIN-20230930113949-20230930143949-00354.warc.gz
CC-MAIN-2023-40
1,020
7
https://georgemanakanatas.github.io/
code
Growing up in a STEM-oriented environment gave me a love for science and technology from an early age. When the time came to look past high school, I decided to pursue a bachelor's degree in Physics. It was a good compromise between the field of engineering, which was too practical for my taste, and the more theoretical alternatives, managing at the same time to tackle both mundane and exotic topics and make them approachable. After graduation I worked as a Physics and Mathematics tutor for several years. However, at some point I decided that I wanted to change my career. After trying and failing to reconcile my work schedule with the time requirements of various master's programs at various universities, I found the perfect match in the Hellenic Open University, which allowed me to pursue my interest in parallel with my work. In 2015, after finishing my Master's in Information Systems, I found work as a developer and software engineer for a consulting firm in Belgium. The next three and a half years were an interesting and unique mix of acquiring applied and theoretical knowledge on the job in a wide array of tools and technologies. The diverse needs of both the client base and the company's projects, in combination with the move to a new country, forced a steep learning curve. The personal and professional growth, however, was worth the effort, and the experience is one that I do not regret at all. 2018 Antwerp, BE 0030 6977 13 82 53 Master's Degree in Information Systems • 2012 - 2015 Curriculum: Information and Communication Technologies. Main subjects: Computer Networks and Communications, Computer Architecture, Software Engineering, Software Design and Management Processing. Thesis title: ”Blocking Inference Channels in Outsourced Data Mining.” Supervisors: Prof. Ilias Stauropoulos, Prof. Basilios Berikios Bachelor's Degree in Physics • 2000 - 2009 Curriculum: Physics and Mathematics. Main subjects: Quantum Physics, Atmospheric Physics, Introduction to Circuit Theory, Applied Geophysics, Elements of Electronics, Introduction to Optoelectronics, Financial Mathematics, Computational Physics Software Architect • November 2019 - today As software architect for the Data Science Hub at VITO my work focuses on the following: Designing specifications for our new projects, or identifying areas that need improvement in our existing ones, and providing realistic schedules for implementing them. Providing solutions to the unique problems that a research organization faces is rarely possible with existing software; therefore, out-of-the-box thinking about changes to existing software, or custom solution development, is often mandatory. Working with my colleagues in the Data Science Hub to provide training in the tools and technologies that are quickly becoming requirements for researchers. As the volume of data that researchers have to work with increases and the push for increased efficiency intensifies, there is an ever-growing need to automate as much of the work as possible. Identifying the sweet spot of the knowledge that researchers need to acquire, as well as a clear and simple path to do so, is key to supporting the organization’s digitization efforts. Software Engineer / Consultant • April 2016 - October 2019 As a software engineer at Apogado I focused mostly on back-end development, and more specifically the following: Expanding and modifying the eGov connector SOA product.
It runs on the Talend ESB but relies only on Java and open-source Apache components to avoid vendor lock-in. I also did small ad hoc development for clients as the need arose, and unit and API testing for all projects. As a consultant I mostly focus on evaluating client processes and software, with an emphasis on overall architecture, security, user experience and support/documentation. I also do small-scale, best-effort scripting development or infrastructure as code when needed to meet any pressing requirements. Master's Thesis • September 2014 - September 2015 Knowledge hiding and association rule hiding, hiding frequent itemsets, exact knowledge hiding through database extension, k-anonymity, RobFrugal anonymisation, the Apriori data mining algorithm. Python programming, the tkinter GUI package, the LaTeX typesetting system. Physics & Mathematics Tutor • September 2004 - September 2015 Physics and Mathematics tutoring for high school and junior high school students. Advanced Physics and Mathematics classes for college students, preparatory courses for end-of-semester examinations. Part-time collaboration • September 2010 - September 2015 Giorgos Siotos Tutoring Center: 09/2010-09/2011 Physics and Mathematics tutoring for high school and junior high school students. Aenaon Tutoring Center: 09/2011-09/2014 Physics and Mathematics tutoring for high school and junior high school students. Greenpeace & BFS Group • March 2006 - November 2007 BFS Group: 09/2007-11/2007 Direct marketing, customer service, portfolio management, database management. Greenpeace: 03/2006-05/2006 Direct dialogue campaigner/fundraiser. My story so far. Georgios Manakanatas Physicist! Teacher! Software Engineer! Greek! It is widely believed that modern workers will need to change careers multiple times during their working lives. Being a scientist, I had to validate the hypothesis by experiment, and since no one I knew seemed prepared to volunteer to be my test subject, that left me with only one option: I would be my own guinea pig! So a few years ago I set out to start a new career and see how things would go, for science of course! My background was a diploma in Physics from the University of Crete. My first professional steps were as a tutor of Physics and Mathematics, working both freelance and in collaboration with various tutoring centers in Heraklion and Athens in Greece. In parallel, however, I had always been involved with computers as a hobby, and when the time came to switch careers this interest, along with the good mathematical background from my diploma, made the IT field an easy choice. After some failed starts trying to find a part-time master's course that would not clash with my working schedule, I found the perfect match in the Hellenic Open University. Three years, lots of coffee and many long nights later, I was armed with an M.Sc. in Information Systems, knowledge of software design and management processing, basic coding skills, as well as data mining methods and tools from my thesis. With these I started testing the hypothesis that someone would actually pay me to apply these new skills in practice. Predictably, the opportunity presented itself in a manner consistent with Murphy's law. The job I found was in data integration; it put none of my fledgling skills in data mining, Python and C to use, and required a completely different skill set, presenting a new and steep learning curve.
Familiarizing myself with a new set of middleware frameworks and tools, as well as new languages, on the job was often hectic and always stressful, but after a few years of working in the field I am now confident in my abilities. While it's too early to tell, the experiment seems to be going well. I have moved on to work as a software architect at VITO, which provides me with new opportunities and challenges me to put all the skills I have developed so far to the test! Born and raised in Cholargos, a suburb of Athens, Greece. Located in a kind of urban Goldilocks zone, it's close enough to the city center to make "city life" easily accessible while being far enough away not to feel really downtown. Like many places in Greece it has a high population density (rivaling that of Tokyo) while at the same time feeling friendly and uncrowded. (Hometown) After school I moved to the city of Heraklion, Crete, to study physics at the University of Crete. I fell in love with the port city as well as the excellent university, and will carry a piece of it with me wherever I go. There I also made my first professional steps as a tutor, before circumstances forced me to leave the island and return to Athens. (Studies, early work) I moved to my third port city, Antwerp, Belgium, in 2016 for work, to start a new career as a software engineer. I was fortunate to find a place to rent in dazzling Zurenborg, and in the past year I have come to love both the city and its people. (Work, career change) Arta is a beautiful small city in western Greece that is near and dear to my heart. It is the birthplace of my parents, home to my extended family and a treasure chest of happy childhood memories. (Extended family) Ever since my father taught me chess as a young boy I have always enjoyed the game, and I try to play on a regular basis, online or in person. While I never studied chess in a systematic manner in order to improve my game, my recent introduction to the fascinating game of Go has renewed my interest in my first love as well, and now I am looking for books and clubs for both. (Chess, Go, board games) As a movie buff I enjoy watching films both at home and, as often as I can, at the movie theater. One of the things I miss the most from back home are the open-air cinemas of Greece. (Film, movies, open-air cinema) A passion that started in my early teens and is still with me to this day is reading any and all science fiction books I can get my hands on. All other literary genres are welcome and appreciated as well, but even crime fiction, my second favorite, is a very distant second. (Books, sci-fi, reading) Being Greek, I of course can't help but love our favorite pastime: meeting friends at a cafe for loud and animated conversations, usually over politics, while enjoying a good cup of coffee. A habit that often has people from northern Europe, for reasons that are beyond my comprehension, half expecting an actual fight to break out in the middle of our pleasant and friendly discussion. ;) Living in Belgium for a few years now, I have caught the biking bug. While nowhere near as cool (or fit) as the pictures here would suggest, I am the proud owner of a good second-hand road bike, and am slowly working on both my fitness level and ever-longer trips in this very conveniently flat part of the world. (Bikes, fitness) Another of the hobbies I have managed to pick up is playing around with electronics: from switching to a mechanical keyboard, parts of which I seem to want to desolder every other Tuesday,
to trying to get as much functionality as I can out of my Raspberry Pi. I tend to waste far more time than would be advisable on these small side projects. (Electronics, mech keyboards, Raspberry Pi)
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00485.warc.gz
CC-MAIN-2020-40
10,562
53
https://www.warriorforum.com/main-internet-marketing-discussion-forum/293466-newbie-outside-english-speaking-countries.html
code
I've known about this famous forum for a while but finally registered. Thank you for checking this - I know your time is precious. I have specific problems that almost no one talks about; to explain them I need some paragraphs, so I use colored font to emphasize my situation. Thank you. I don't live in the US, UK, Canada, India, Australia, New Zealand or Singapore - those English-speaking countries. Basics: several years ago I was trying to figure out a way to get a better life, and found that marketing might be the answer, instead of investing in stocks. I quit my job last year, but I still have not reached the monthly income the job provided me. Mainly I have studied Dan Kennedy, Ted Nicholas and some other internet marketers. I have a paradoxical problem, but first I'd like to ask: 1) What if you do not live in an English-speaking country? I am not even living in India or Singapore; I live in Taiwan, where people speak traditional Chinese. I've seriously considered starting affiliate marketing, but there are several problems: ClickBank does not support our currency, PayDotCom doesn't support our currency, and much of the software and many plugins are English-language based. I do not have much of a budget; if I did, I could probably hire someone to translate those into our language (I can do it myself but it costs a lot of time). So those PLR products, those ClickBank products, public domain works... almost none of them fit my situation (at least I think so). We don't even have an EzineArticles.com kind of site here, so I just do article marketing on my own blog. To name some situations others may not experience: basically, here in Taiwan people think of internet marketing as - SEO services that charge a lot of money - the next Facebook (platform-based start-ups). I just want to ask: has anyone applied internet marketing in non-English-speaking countries? 2) I think the niches I have passion for are marketing and football/soccer, but not many people play soccer here (fewer than baseball/basketball), so I think the marketing niche is better for me. But there's the paradox: can you start a niche in marketing if you have studied marketing a lot but have no good track record? Do people believe you? I've managed to build a small list and sold my own info product (about email marketing; it cost me a lot of time to develop, since as I said I could not find proper affiliate products to sell), because I think marketing is very results-based. So I want to ask: has anyone started as a nobody in the marketing niche? I was thinking of positioning myself as a reporter but I still think that's not enough. By the way, I am a 29-year-old local male; I had one job but quit it; I have no business experience; I'm not a techie person; I have been studying marketing for 2+ years; my parents are not business people (they believe a stable job is better). Thank you very much.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00341.warc.gz
CC-MAIN-2021-49
2,725
20
https://users.rust-lang.org/t/in-memory-compilation-and-execution/30080
code
I have to dynamically build small FSM automata, then run them within the same program. I have no concern about the resources it takes to build them, but I want them to be as efficient as possible when they run. Since each FSM can be fully determined (number of states, transitions, message signatures, etc.), my guess is that the best way to achieve this tradeoff is to produce ad-hoc Rust code for each FSM (with hardcoded states and transitions), compile it down to an optimized binary with rustc, then run it as a subprocess. How difficult would it be? How much more difficult would it be if I wanted to avoid writing source files and binaries to disk? I guess this would require “in-memory compilation”, with the source code held in a string and the compiled code written to an… executable bytes array? Or maybe a bytes array wrapped in a function or in a trait object? Is this possible? Why and how?
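For what it's worth, the disk-based variant of that pipeline is easy to sketch. Below is a rough illustration, with the driver written in Python for brevity; the hard-coded two-state FSM source is a purely hypothetical stand-in for whatever generator would emit the real states and transitions, and rustc is assumed to be on PATH:

```python
# Sketch of the "generate Rust, compile with rustc, run as a subprocess" idea.
import os
import subprocess
import tempfile

# Hypothetical generator output: a toy two-state FSM with hard-coded transitions.
RUST_SRC = r'''
fn step(state: u8, input: u8) -> u8 {
    match (state, input) {
        (0, 1) => 1,  // hard-coded transition table
        (1, 0) => 0,
        (s, _) => s,  // anything else: stay put
    }
}

fn main() {
    let mut state = 0u8;
    for &input in [1u8, 0, 1, 1].iter() {
        state = step(state, input);
    }
    println!("{}", state);
}
'''

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "fsm.rs")
    exe = os.path.join(d, "fsm")
    with open(src, "w") as f:
        f.write(RUST_SRC)
    # -O turns on optimizations, which is the whole point of the exercise.
    subprocess.run(["rustc", "-O", "-o", exe, src], check=True)
    result = subprocess.run([exe], capture_output=True, text=True, check=True)
    print("final state:", result.stdout.strip())
```

The truly in-memory variant (source held in a string, machine code in an executable byte array) is a different and much harder problem; the sketch above still touches the disk, just inside a temporary directory.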
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528687.63/warc/CC-MAIN-20190723022935-20190723044935-00349.warc.gz
CC-MAIN-2019-30
908
5
http://www.backtrack-linux.org/forums/printthread.php?t=41748&pp=10&page=1
code
Metasploit and Nessus scans. Hey all, this is my 3rd or 4th post, and the question that is bugging me is this (excuse me if it's a dumb one). So far I have installed BT 5 on my netbook and got the database working with msf, thanks to sickness (that helped a lot, btw). The question is: once I get Nessus working with msf via "load nessus", if I do a db_nmap xxx.xxx.xxx.xx, will it use both msf and Nessus against the xxx.xxx.xxx.xx range? Or do I have to run them separately and then import the results into the database? Thanks for any feedback.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00619-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
540
4
https://www.exascaleproject.org/modernizing-computing-to-usher-in-the-next-generation-of-scientific-discoveries/
code
Modernizing Computing to Usher in the Next Generation of Scientific Discoveries By Pat McCormick Exascale can mean different things based on your experience and the role you play in taking our computing capabilities to the next level. In the past, I was fortunate enough to see the Exascale Computing Project (ECP) as part of the initial leadership team. Today, as a technical project leader for three separate efforts, I see ECP from a similar yet different perspective. Across the projects I lead, there is a common theme of modernization. However, each project follows a different approach to achieving this goal, pushing boundaries and developing new technologies while also providing a stable future foundation for "tried-and-true" capabilities long established in the high-performance computing community. For example, the Flang project is actively working to provide a modern, open-source implementation of the compiler infrastructure for the Fortran programming language - a language with roots that go back to 1953, when John Backus proposed it as an alternative to assembly language. It is hard to argue that any other programming language has had a more significant impact on computational science. This effort will help ensure that Fortran can continue its long-established role well into the exascale era of computing. In contrast, the Legion project, which recently won an R&D 100 Award, aims to provide a new data-centric programming system explicitly targeted at the exascale generation of distributed-memory and heterogeneous architectures. While Legion delivers a foundation for many forms of computing, it has most recently provided the foundation for accelerating the training of deep neural networks (DNNs) for machine learning workloads. The results have already delivered performance and scalability not accomplished using industry-established frameworks such as Keras and PyTorch, as well as a promising capability for running large-scale problems on DOE's upcoming exascale platforms. Finally, in the Kitsune project, we are extending the foundation of today's compiler infrastructure to make it more aware of parallel applications so that they can be more effectively analyzed and optimized. Today's compilers reflect the evolution of technologies that followed hand-in-hand with Moore's Law and Dennard scaling. This design makes them very good at optimizing sequential programs, but they rarely capture a programming-paradigm-independent representation of the parallel semantics within an application. Our efforts aim to capture this parallelism, provide additional optimizations, and improve performance portability across different architectures and underlying software mechanisms. Ideally, our efforts will enable a more productive suite of software development tools for the next several generations of cutting-edge scientific applications. Regardless of these projects' different technical viewpoints, the overarching goal to modernize aspects of our approach to computing remains intact. In my mind, exascale is not a predetermined set of guidelines but instead a mindset and software capability that should enable creative uses of exascale-class computing platforms to help usher in the next generation of scientific discoveries.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649741.26/warc/CC-MAIN-20230604093242-20230604123242-00669.warc.gz
CC-MAIN-2023-23
3,269
7
https://isabelle-augenstein.medium.com/increasing-well-being-in-academia-97f3ebc1599f
code
Academia is far from the best work environment from a mental health perspective. From reports on high suicide rates among PhD students to burn-out of academics, especially women, the statistics make for rather grim reading. This is no wonder, as academics are under a lot of pressure to produce high-quality research and teaching, while juggling grant writing, service and more, and compete with peers in their respective fields worldwide. Is it all bad news, and are there ways we could increase work satisfaction and lower stress? I’ve recently spent quite a bit of time reflecting on how we, as individual academics — but also as part of academic institutions, or of research communities as a whole — could apply research findings from positive psychology to increase well-being in academia. I thought I’d share some of these ideas, in case this is useful to anyone else. Caveat: I am a researcher in Natural Language Processing and Machine Learning. I dabble a little in psychology — my minor during my undergraduate study at Heidelberg University was psychology, and I’ve recently joined a new project on using Natural Language Processing for measuring happiness and well-being (with psychologist Oscar Kjell and computational social scientist Andrew Schwartz). For this reason, I’ve been brushing up on the ‘well-being’ literature. I can really recommend the Coursera course on “The Science of Well-Being” for those who are looking for more reading and practical tips on the topic. However, I am far from an expert on this topic, so please take the following with a grain of salt. Note also that I’ll be using the terms ‘happiness’ and ‘well-being’ interchangeably, and without going into detail of how these were measured. There are many instruments for measuring these concepts, and occasionally, findings obtained with different instruments contradict one another. This post therefore restricts its discussion to findings which have been shown to hold repeatedly. I’ll very briefly summarise findings on well-being below, following which I’ll share some ideas on how these ideas could be applied to academia. Of course there are many more variables, but some of these are outside of our control. What does the literature say on… … what makes us happy? Strong effects are reported for: - engaging in activities that lead to our being in a “flow state”…
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00014.warc.gz
CC-MAIN-2023-40
2,406
9
http://catalogablog.blogspot.com/2004/01/open-source-tool.html
code
Thursday, January 22, 2004
pyCatalog is a Python, MySQL, wxPython and ReportLab application specifically usable in libraries and information centers. It produces a book catalog and a card catalog in PDF format, rendered using ReportLab. The program takes a MARC file as its source data. It is platform-independent: it will run on Windows, Linux and Mac operating systems. Seen on oss4lib.
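As a rough illustration of the ReportLab half of that idea (this is not pyCatalog's actual code; the records below are hypothetical, and parsing them out of MARC, e.g. with pymarc, is assumed to have happened upstream):

```python
# Toy sketch: render a book catalog to PDF with ReportLab.
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

# Hypothetical (title, author, call number) tuples, as a MARC parser might yield.
books = [
    ("The MARC Record", "Jane Doe", "Z699.35"),
    ("Cataloging Basics", "John Smith", "Z693"),
]

c = canvas.Canvas("catalog.pdf", pagesize=A4)
width, height = A4
y = height - 72  # start one inch below the top edge
for title, author, call_no in books:
    c.drawString(72, y, f"{call_no}  {title} / {author}")
    y -= 18
    if y < 72:  # start a new page when the current one fills up
        c.showPage()
        y = height - 72
c.save()
```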
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191353.5/warc/CC-MAIN-20170322212951-00383-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
381
2
http://cart.crotsoftware.com/
code
Easy-to-set-up plagiarism detection engine. Start here for free.
- CrotPro allows you to find out whether the submitted text was copied from the following sources:
- English Wikipedia.
- Studies show that 10-20% of plagiarized papers come from Wikipedia.
- More sources to come!
- Any public web source can be added on demand.
- The service does not keep submitted documents. Once the detection results have been checked out, the submission will be shredded.
- The service is free with a limited number of submissions.
- Advanced subscription levels go up to a dedicated plagiarism detection engine!
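The page doesn't describe how the detection works internally. As a toy illustration of one standard approach to this kind of copy detection - word n-gram "shingling" scored with Jaccard overlap - under the assumption (not stated by the service) that something along these lines is involved:

```python
# Toy copy-detection sketch: shared word trigrams between two texts.
def shingles(text, n=3):
    """Return the set of word n-grams ("shingles") in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

source = "the quick brown fox jumps over the lazy dog"
submission = "a quick brown fox jumps over a sleeping dog"
score = jaccard(shingles(source), shingles(submission))
print(f"overlap score: {score:.2f}")  # higher scores suggest copied passages
```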
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189686.56/warc/CC-MAIN-20170322212949-00443-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
578
10
https://www.g2.com/products/station/reviews
code
What do you like best?
1) Variety of "integrations": in my case, I've not missed any app/service integration. Everything I use in my work/personal workflows is available in Station.
2) Performance: it's really notable how memory usage varies significantly between, for instance, Chrome with a bunch of tabs and Station. I still need to validate "worst-case scenarios" in Station where I'd have several tabs for several apps (e.g. Google Docs), but so far it's been better on Station.
3) Ease of use: adding apps is really a straightforward process. Removing them not so much, but it's still a fairly simple process. I've not played much with more than 2 accounts for the same app, so I need to check that too.
What do you dislike?
- Not exactly dislike, but managing multiple accounts could be easier, as could some admin stuff, like resetting Station to a fresh install (the current process is really easy, but maybe not for all types of users).
- One minor annoyance is that with Okta, I get an "Important: 5 of your apps require the Okta browser plugin" message and cannot really do anything about it, as trying to install the plugin does nothing.
- Another very minor thing has to do with maximizing the Station window using the Option key (Mac); it always makes it full-screen, as opposed to expanding to fit within the screen real estate.
Recommendations to others considering the product:
Try Franz, it's what I used before Station, and compare it to Station. Find ways to measure the performance of each, but also the performance of your computer when NOT using either app. See how each "feels" in your workflow (personal and work-related).
What business problems are you solving with the product? What benefits have you realized?
1) Navigation: too many tabs, too many apps, too many desktops, apps to manage the arrangement of windows...
2) Performance: memory usage, above all.
3) Consolidated workflow (work & personal): especially with email and chat/video call apps.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425160058-00255.warc.gz
CC-MAIN-2019-18
1,970
14
https://faq.veryfi.com/en/articles/2946860-how-to-change-date-format-default-currency-and-timezone
code
Go here: https://hub.veryfi.com/me/?type=profile (you may need to log in if you are not already logged into the Veryfi Hub web app). Under "Profile Settings" you can change your Default Currency, Default Time Zone and/or Overwrite Date Format. Press SAVE (the blue button) further down on the page. Veryfi Hub Settings allows you to change the format of dates, your default currency and your timezone.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00437.warc.gz
CC-MAIN-2024-10
415
5
http://forums.theregister.co.uk/forum/1/2011/08/15/android_gpl/
code
Every manufacturer using Android is in breach of the GPL, according to IP attorney Edward Naughton, though his last accusations didn't exactly run Android out of town. Last time it was a complex argument about how effectively Google had cleaned GPLv2-licensed header files, but this time the argument isn't so esoteric and could, …

Why do you lot reprint bullshit from Edward Naughton? He's a Microsoft lawyer trying to stir up some FUD. And you are assisting him in that aim.
> The GPLv2 is pretty explicit that anyone failing to distribute source is in breach,
The GPLv2 requires *either* that source be distributed with the binaries (section 3(a)) *or* that a written offer, valid for 3 years, to distribute that source accompany those binaries. It does not matter that source is not immediately available - Naughton is just bullshitting. Again. GPLv3 has very similar clauses in Section 6. Come on, Bill, at least make a token effort towards journalism. Like reading the licence you're claiming to be writing about. Reading is fundamental.

Did you happen to follow the link: The point is either you ship the source, or make it available. So if you don't ship it, you can easily set up a site for distribution. Note this site isn't Naughton or Florian but set up by some guy named Mathew. I would suggest you learn more about the law (tort and contract law) before you cry foul. In general I hate lawyers as much as most people here, but I do respect the law.

Here we go again...
> Reading is fundamental.
It most assuredly is.
> The point is either you ship the source, or make it available.
I thought you said reading was fundamental? Here's section 3(b) of GPLv2:
"b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange;"
Show me where in that it says that you have to "ship the source, or make it available"...
> So if you don't ship it, you can easily set up a site for distribution.
You can. I would recommend any GPL distributors do exactly that - it is by far the simplest method of compliance. But not doing so does *not* mean you are non-compliant - read the licence excerpt above to find out why.
> I would suggest you learn more about the law
And I would recommend exactly the same for you. As I did the last time you held forth on a subject you didn't understand.

"Show me where in that it says that you have to 'ship the source, or make it available'..." IANAL, but what Mr. Gumby (Hello!) wrote seems like an accurate paraphrasing of section 3 to me:
3. You may copy and distribute the Program [...] provided that you also do one of the following:
3(a) - Accompany it with the complete corresponding machine-readable source code [...]; or,
3(b) - Accompany it with a written offer [to provide a machine-readable version of the source code]; or,
3(c) - Accompany it with the information you received as to the offer to distribute corresponding source code [for noncommercial distributions].

That much is apparent...
> what Mr. Gumby (Hello!) wrote seems like an accurate paraphrasing of section 3 to me:
It is not.
> 3(b) - Accompany it with a written offer
This is the point: 3(b) requires a *written offer* to produce source on demand.
It does not require the immediate production of source - it requires a *written offer* to produce it. Thus 3(b) distribution does not require any source to be transmitted until and unless someone asks for it. This is why Naughton's arguments are such utter bollocks - failing to put source up for download is not non-compliance; failing to produce it should someone ask for it would be. Now it is clear that by far the simplest way of ensuring compliance is just to stick it on a web site, but that doesn't mean that a more labour-intensive method cannot *also* be compliant, should someone be suitably moronic to want to do it that way.

'Some guy named Matthew'... ...could do with a bit better attribution, both here and in El Reg. He's a member of the Fedora engineering steering committee and a significant contributor to both Fedora and the upstream kernel. He's not exactly just an 'enthusiast', as El Reg puts it. Of course, Matthew focuses his GPL compliance efforts on productively engaging with infringing companies, rather than self-publicizing, so it's not surprising that lazy journalists are less clued in about his stuff than about the garbage Edward Naughton puts out.

Theoretically you can ship someone the entire source code on paper and be in compliance.

"Show me where in that it says that you have to "ship the source, or make it available"..." It doesn't do you any good if you make statements and clip only subsections of the code. You can see the entire GPLv2 here: In the preamble it says the following:
"To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights."
While the preamble isn't the exact Ts and Cs, it does show the clear intent regarding the burden of redistributing the source code or making it available. If we go into the Ts and Cs, section 3 as a whole says the following:
3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)
As you can clearly see, section A says to ship the source code with the distribution.
Section B says that if you don't ship the source code with the distribution, you have to provide in writing an offer to provide the source code at the recipient's request for free. (This offer is good for 3 years and you can charge for S&H.) So as you can see, RIF. BTW, I thought I was being clear that if you don't ship the source, but provide a link to your site where you host the source (for free) and document it, you would be in compliance. In fact that is part of what the author of the list of tablets did do when he indicated their compliance.

Unless your computer can read and compile the code from paper, the answer is no.

While I admit to not being a lawyer, that doesn't mean I can't read a contract. ;-) The reason I published the entire clause #3 is that you're again misinterpreting what you're reading. Paragraph 3 has 3 subclauses. The person distributing the code has to choose 1 of the 3 options.
A) You ship the code with your distribution.
B) You provide in writing an offer to produce the source code on request.
C) An option reserved only for noncommercial use of the code, and based on your source being one from a source that exercised option B. (You should read it to understand it.)
Now you're suggesting that if someone makes a written request for the source code, you don't have to be responsive. That is not correct. I'm not suggesting that when you send a request they ship you the source overnight via FedEx. The reasonableness is subjective, and if you have the time to make the request and then sue, clearly they are not being responsive. With respect to Naughton, it appears that these companies are not in compliance with subclauses A and B, and since these are commercial enterprises, C isn't an option. The point you apparently are missing is that if you ship a product, you have to ship the code or a written offer. Having it available on your website for download would appear to satisfy subclause A. Not having it available on your website, and not providing a written offer, violates both subclauses A and B. Therefore these sites are not in compliance and are open to litigation. And let's be clear: just responding to a request for the source code isn't enough if you don't actually make the written offer to do so. You are still not in compliance with clause B. Clearly reading and understanding what is written in a contract is not your strong suit.

Did the vendors include a written offer to provide a machine-readable copy of the corresponding code? If not, a subsection b defense would also be invalid, and they are in non-compliance. Section 3 subsection b is quite clear on the matter: the offer MUST accompany the distribution. The offer MUST be made and it MUST be EXPLICIT. The problem in a subsection b defense isn't the time frame for providing the code; it's if no offer accompanied the binaries. Of course, this is the exact type of reason many vendors prefer BSD.

> It doesn't do you any good if you make statements and clip only subsections of the code.
It doesn't do you any good to quote large passages and ascribe to them meaning which simply is not there. You are making this up.
> In the preamble it says the following
Where in what you have quoted does it say what you claim? You are making up meaning which simply is not within the text you are quoting. Next, you quote section 3. What bearing do clauses 3(a) or 3(c) have on any of this? We were discussing 3(b) distribution, so you are simply quoting extraneous information. This would appear to bear the hallmarks of a weak argument.
But let's look at 3(b), because that *is* relevant. It requires a *written offer* of source. Look through the words - a *written offer*. See if you can find anything in that clause that requires more than a *written offer* of source - such requirements are conspicuous by their absence. Your repeated attempts to claim more than is in the licence are simply not backed up by the text that is there. 3(b) does not require you to post code until someone asks for it, and claims that a failure to have a download site equates to non-compliance are simply wrong. The licence just does not say what you claim, however repeatedly or verbosely you claim it.
> Section B says ... you have to provide in writing an offer to provide the source code
Finally, you've said something correct. But note that it does NOT say that you have to post any code unless you are asked for it; this clause does not support your argument, just like it doesn't support Naughton's.
> So as you can see, RIF.
Indeed it is. Please do read what is there in the licence, rather than just relying on the voices in yer heid again.
> BTW, I thought I was being clear that if you don't ship the source, but provide a link to your site where you host the source (for free) and document it, you would be in compliance.
No-one has argued any different. But what you have repeatedly missed - as did Naughton - is that if you *don't* ship the source and you *don't* provide a download link, you can still be in compliance, because the licence does not require that link; it requires a *written offer* to supply source on demand, not before. Please go and do the reading you exhort others to do; your comprehension skills appear to be somewhat lacking.

You're making a lot of fuss about a written offer making the software compliant. Could you please define "written" in legal terms? I think you'll find that "written" needs to be literally just that: eye-readable on a piece of paper. In which case any software distributed electronically cannot comply with GPL2, not even if it has a file for you to print out, unless the source code is available either with the distribution or over the net.

> While I admit to not being a lawyer, that doesn't mean I can't read a contract. ;-)
I beg to differ.
> The reason I published the entire clause #3 is that you're again misinterpreting what you're reading.
Well, one of us is. Let's see how this pans out, shall we?
> Paragraph 3 has 3 subclauses. The person distributing the code has to choose 1 of the 3 options.
Yes. Right so far. One of the three. 3(a) and 3(c) are not relevant to the situation - as I've already pointed out - so repeating them is just excess verbiage; it serves no purpose whatsoever, as we're not talking about 3(a) distribution, and nor are we talking about 3(c) distribution. One can only wonder at your reasons for thinking either 3(a) or 3(c) relevant to a discussion about 3(b). You can read a contract, can you not?
> B) You provide in writing an offer to produce the source code on request
Correct. Now look at what you have written: section 3(b) compliance requires *** A WRITTEN OFFER *** to produce source code *** ON REQUEST ***. See if you can spot, anywhere within section 3(b), a requirement to do anything more. Got one? Of course you haven't, because it isn't there. Section 3(b) requires the distributor to supply *** A WRITTEN OFFER *** to supply code on request. It does *not* require the distributor to supply code straight away - only on request. If you think this is wrong, show the words that prove it. Got any?
Of course you haven't, because they're not there.
> Now you're suggesting that if someone makes a written request for the source code, you don't have to be responsive.
I am suggesting nothing of the sort. If you'd read what I actually posted - rather than what you imagine I did - you'll see that nowhere have I even intimated that it is acceptable to be unresponsive to requests for code under the written offer made during a section 3(b) distribution. What I have said - what I am still saying - is that the offer does not need to be fulfilled until and unless a request for the code is made. Thus claiming that a distributor is non-compliant because he has not publicly posted source is incorrect.
> With respect to Naughton, it appears that these companies are not in compliance with subclauses A and B
That's Naughton's claim - and as I've pointed out, it's a bogus claim. If Naughton has asked for code and been rebuffed, that would be a violation. If Naughton has failed to find a download site, that is not.
> The point you apparently are missing is that if you ship a product, you have to ship the code or a written offer
The point *you* are missing is that if you ship a product, you have to ship the code or a written offer. See the "OR" in the middle of that? That's good, because that's a word that actually is there. Moreover, that written promise needs to be fulfilled if it's ever called in. But if it isn't called in, then not passing the source to anyone is not a violation of the licence.
> Not having it available on your website, and not providing a written offer, violates both subclauses A and B.
But no-one is talking about not making the written offer. That's something that's come from the voices in yer heid. You might like to talk to someone about those... Not having the code available on your website, but making the requisite written offer, constitutes compliance with GPLv2 section 3(b). And that's all there is to it, really...
> Therefore these sites are not in compliance and are open to litigation.
Do you have evidence that they are not making the written offer?
> Clearly reading and understanding what is written in a contract is not your strong suit.
Oooh, once again you make claims about the comprehension capabilities of others when the real problem is a combination of your inability to follow basic logic and your desire to fabricate information about what other people are doing.
> If not, a subsection b defense would also be invalid, and they are in non-compliance.
That is correct. But if they have made the offer, they are in compliance - at least until someone requests the code. Easy, isn't it?
@ Chris W
> You're making a lot of fuss about a written offer making the software compliant.
Yes. That's because that is the wording of the licence.
> Could you please define "written" in legal terms?
No. That's up to you to get sorted out. I'm just reiterating what the licence *says*, rather than what some people would have you believe it says. These are not my words...
> I'm just reiterating what the licence *says*
No, you are not "just" reiterating; you are going further. Clause 3b may make provision for not distributing nor publishing source code, but here in the real world can you name one piece of GPL-licensed software that comes with a written offer as provided for by clause 3b?
It *IS* easy (when you ignore the point). You didn't answer my question; you just supposed they provide an offer. Let me re-ask the important part: did these companies always provide a written offer to supply the code upon request? If they did (at all times), they are good. If they did not at any point, their license was revoked. If so, they have to contact every copyright holder and secure a new license (which was really the part of the article that was surprising to me). Whether there was or was not a request is irrelevant unless the offer has been made. This is an honest question; I don't own any of these companies' products, and haven't read the paperwork that came with the phones purchased by others, so I lack even anecdotal evidence to this effect.
> No, you are not "just" reiterating
Yes, I am.
> you are going further.
> here in the real world can you name one piece of GPL-licensed software that comes with a written offer as provided for by clause 3b?
Irrelevant. That few (if any) distributors choose the difficult way of complying with the licence does *not* mean that that method is non-compliant. Of course everyone sane just ships the code on a web site - that makes more sense than anything else. But that does *not* mean that doing it differently constitutes non-compliance - it simply doesn't. It constitutes idiocy, but that's not a GPL violation.
> you just supposed they provide an offer
I have no reason to believe otherwise. If you have evidence, do post...
> If so, they have to contact every copyright holder and secure a new license
Yes. This is true. But because the Free Software community is interested in enabling distribution, rather than preventing it, I'm unaware of an instance where such a re-licensing was not forthcoming, once any violations had been sorted out.
> so I lack even anecdotal evidence to this effect
As do most people. Shall we presume these people innocent unless evidence of their guilt comes along?

Why pick on Android (though it is Open Season, of course)? What about gear like routers, TiVos, satnavs, and a gazillion other hi-tech widgets which run on some form of Linux? I think this guy just wanted some free advertising, or his Warhol 15 minutes.

Why pick on it? 100% of the Android devices in my house are in GPL breach. There are binary drivers linked into the kernel which do not have their source published, and the kernel is modified without the patches being published either. I suspect the rest of the Android ecosystem is not any different from what I have on my desk. Google saying Android is "open" does not make it exempt from the GPL. (What "open" means for Google is open for debate.) If you know of more non-GPL-compliant products, feel free to contact the authors and warn them about those license infringements.

Looks like he is bored of chasing ambulances... ...and using the same 'skills' on another route. Maybe he's hoping that if he bashes Android enough then The Cult of Jobs will offer him a highly paid job to stir up superFUD.

Another slow news day?
1: A casual glance at the list confirms what I expected: a long list of piss-poor, low-end devices thrown together in Chinese sweatshops are non-compliant. Good luck convincing the Chinese to do anything about that, and try not to be surprised if none of us are surprised. After all, these shady companies tried building fake Android devices before realising they could just grab the real thing!
2: The GPL doesn't specify a time limit on supplying source, and it's (unfortunately) fairly common for it to take a few weeks.
There's slop in the system because of that, and swift enforcement isn't really an option.

3: to date, many companies have had to be nudged into releasing the source *faster* by eager modders. There's been no panic from copyright holders and very little feeling that any of the companies within reach of the law aren't going to comply eventually.

4: Naughton misrepresents how enforcement is usually handled. Delay too long and yes, the licence is declared void, but getting compliant and saying 'sorry' almost always gets it reinstated - albeit often with a legally binding agreement not to do it again. Less of a time-bomb, more of a rubber mallet to compel compliance!

There will be companies that flout the licensing, and inevitably some will be within reach of our courts and end up on the wrong end of a court case. That's not a specifically Android problem; that's the same corporate theft a long succession of scumbags have tried ever since the GPL was created. It's sad that the Reg's ongoing war against Android has sunk to this level. Couldn't you find a real story to beat on Google and/or Android with? Slow news day?

The story "Google buys Motorola" was put on El Reg two minutes before your post. Slow news day it isn't!

"2: the GPL doesn't specify a time limit on supplying source and it's (unfortunately) fairly common for it to take a few weeks. There's slop in the system because of that and swift enforcement isn't really an option."

No, it doesn't; however, there is an implied reasonableness. Please understand what it would take for someone to raise a lawsuit against said company that is in violation...
1) A written request has to be made asking for the source and notifying said company that they are not in compliance with the GPL.
2) There has to be some amount of time for said company to respond to the letter.
3) Person goes to lawyer. Repeats steps 1 & 2. [Note: they could go directly to a lawyer...]
4) Lawyer files lawsuit and starts the process rolling.
I don't know what fantasy world you live in, but it takes time to start a lawsuit.

With respect to point 4:

"4: Naughton misrepresents how enforcement is usually handled. Delay too long and yes, the licence is declared void but getting compliant and saying 'sorry' almost always gets it reinstated - albeit often with a legally binding agreement not to do it again. Less of a time-bomb, more of a rubber mallet to compel compliance!"

Naughton isn't misrepresenting anything. To your point, one way to end litigation is to become compliant. But that does mean that you are open to litigation in the first place. And that's the point.

I like the GPL and GPL-friendly companies like Apple. They modded the GNU C compiler to handle Objective-C, so they gave back their modded version. They played by the rules and we all prospered.

This entire story is frankly absurd! Most of the major Android players (e.g. HTC, Samsung etc.) do comply with the GPL, although often not as quickly as perhaps they should. To say this "threat" is hanging over Android is nonsense, because all any individual phone/tablet manufacturer needs to do to comply with the GPL is publish the kernel source, which isn't exactly rocket science. Also, did the author of this article even look at the link which shows the list of GPL-compliant/breaching tablets?
It's so woefully out of date it's untrue (no Xoom, Transformer, GTab 10.1 etc.), and when you look closely you'll see that the vast majority of the tablets in breach are the cheapo no-name Chinese clones who basically don't give a monkey's about the GPL.

I've never really understood why people didn't just use something like NetBSD instead. Free and open source, but very liberally licensed indeed. Sure, it might not be quite as capable as its Linux equivalents in various situations, but then neither is Android. Given the stress involved with handling the GPL in a corporate environment that has to deal with inconvenient things like closed-source drivers for some system components, it would seem like a bit of a no-brainer... especially given the money and talent available to Google.

NetBSD is not supported by driver / chipset companies, who usually provide SDKs based on WinCE and Linux. If NetBSD were compelling for its licence, it would be the de facto choice for such solutions already. As a general point, it's nowhere near as robust or world-proven as Linux either, and is probably severely deficient especially for multicore architectures. Anyway, there is nothing to stop Android switching if it were proved to be worth it. The entire userland of Android is BSD-based, and I expect most apps really don't know or care what kernel is underneath it all. Google would make the appropriate changes and throw in a BSD kernel, and userland would be much the same.

In this instance the "violation" is a big deal about nothing. The no-name Chinese knockoff dists are in violation by not offering the kernel source, but then again they probably just grab and build the kernels straight from the Android repository. So yes, they're in violation; no, there aren't any amazing trade secrets behind it all.

If you look at that list, every manufacturer you've ever heard of (with the single exception of Archos) is compliant. The rest are mostly obscure Chinese companies that make tablets and ... tablets. Good luck suing them. The arguments about license laundering are equally silly: if Google reimplements the functionality of GPL'd code, then their code is, unless you can prove direct copying, different code - no licence issues arise. So again - good luck with your lawsuit. You'll be needing it.

Re: Every manufacturer?

> every manufacturer you've ever heard of (with the single exception of Archos) is compliant

Archos has a download site at http://www.archos.com/support/support_tech/updates_gnu.html?country=us&lang=en - is this not compliance?

> good luck with your lawsuit. You'll be needing it.

Again, reading is fundamental. According to the site that has the list, Archos is compliant on all but one product: the Archos 7. Thus Archos isn't in full compliance and has potential litigation issues. However, I think the point is that there are other companies that are not compliant at all.

> According to the site that has the list, Archos is compliant on all but one product: the Archos 7.

I'm not overly familiar with the Archos line of products, but http://www.archos.com/support/download/software/sources/Archos7HT_GPL.tgz would seem to be the GPL components of the Archos 7 HT.

> Thus Archos isn't in full compliance

You've not demonstrated that. You've just reiterated someone else's claim - which might well be wrong.

At the risk of repeating what several other comments have pointed out: making the source available is NOT ENOUGH.
In addition, you MUST provide a written notice of availability, and it would be no surprise to me if this notice was missing from the documentation (that is itself usually missing) from the packaging of many new devices.

@ First Dave

> you MUST provide a written notice of availability

That's what I've been arguing...

> it would be no surprise to me if this notice was missing

If you have evidence of transgression, go ahead and post it. Supposition really doesn't cut the mustard.

Anyway, aren't pure clones okay? If I just install some standard Linux distro on a PC, even to sell the PC, I don't think I'm obliged to hand around the source code. I may not -have- the source code. I could be wrong about that. Aren't the minor Chinese Android tablet makers doing the same thing - just installing the standard Android OS onto their devices? As witness: the enthusiasts who take and install a newer edition of Android on the same devices. I'm not sure about the legality of that - nor about Android 3 being not-yet-open source - but that it can be done at all demonstrates that the hardware is standard and the software essentially unchanged, imo. The -point- of the GPL is that typical use of the software -isn't- licence-encumbered. There is a licence, which is intentionally a non-encumbering one, except that if you want to encumber your copy of the software, the licence encumbers you from doing so.

pure clones not okay

If you "distribute" GPL-licensed binaries, you must meet the criteria for making the source code available. If you install a standard Linux distro on a PC, then sell the PC, that would normally count as noncommercial distribution, so you can just include information on where to find the source code. That option is not available to commercial distributors - they must either include the source or else a written offer to provide the source to anyone that asks.

Re: Anyway, aren't pure clones okay?

> If I just install some standard Linux distro on a PC, even to sell the PC

If you sell it, that's likely considered a commercial distribution. Thus section 3(c) of GPLv2 doesn't apply.

> I don't think I'm obliged to hand around the source code.

You are. Both section 3(a) and section 3(b) require you to hand out source code, either with the binaries (for 3(a)) or on demand (for 3(b)). 3(c) - which allows you just to point at your upstream provider - is only available for non-commercial distribution.

> I may not -have- the source code.

You need to get it. You may not redistribute commercially without it. Luckily, either you were given it with your binaries, or else you have a right to ask your upstream provider for it. Possibly both.

> I could be wrong about that.
> Aren't the minor Chinese Android tablet makers doing the same thing

Doesn't matter. If they are not distributing source, or at least offering to do so, they are in violation of the licence. That can be fixed - but it needs action.

> The -point- of the GPL is that typical use of the software -isn't- licence-encumbered.

Well - that's *one* of the points. The other is that GPL software cannot be made proprietary. I see that as rather more important. Luckily for all of us, compliance with the GPL is *easy*. It amazes me how much effort certain companies will expend in trying - and ultimately failing - to circumvent the GPL.

GPLv3 allows for automatic reinstatement of licence benefits once any violations have been sorted out.
It was considered a bug in GPLv2 that such automatic reinstatement was not possible - nevertheless, the copyright owner is entirely capable of reinstating rights under GPLv2 once he has decided that any transgressions have been put right.

make more profit, sell the source

I don't see why you couldn't do this, if you sell PCs:
1 "super-business" PC with Super Business Linux(*) pre-installed: ₤1000
1 box of Super Business Linux installation DVDs (for just in case): ₤25
1 box of Super Business Linux source code DVDs (recommended for legal reasons): ₤25
(*) Super Business Linux = Xubuntu with a different background

Some people are missing the fact that Google did release the GPL source code for Android 3.x. Only the Apache-licensed code is not released yet.

Did you read the article? That may be fine from Google's side, but that's not where the problem is. Are the actual Android distributors (HTC, Samsung, etc.) distributing the source code for the GPLed software they include on their products, as required by the license? Seems they aren't.

Re: Did you read the article?

> Are the actual Android distributors (HTC, Samsung, etc.) distributing the source code

HTC have quite a bit at http://htcdev.com/devcenter/downloads
Samsung has https://opensource.samsung.com/
So yes, they do appear to be distributing source. I've not checked compliance for every single binary they ship, but they do appear at least to be attempting to comply.

> It's the Chinese cowboys that know they're beyond the reach of law that aren't bothering.

I'm not sure about that; my experience of dealing with Chinese manufacturers is that they frequently do not understand their obligations. I was buying in rather nice little embedded PCs for a while. They ran a Linux distro that the manufacturer had rolled together - which was good, because the CPU wasn't *all* that x86-compatible. But when I asked for source, they just pointed me at their upstream - they believed that to be sufficient until I had a few words...

> I'm still struggling to understand why the Register sold its soul like this.

There was an article the last time Naughton opened his trap. Not quite so uncritical that time, though.

Please re-read the article, especially the section mentioning the list of non-compliant hardware maintained by Matthew Garrett. Apparently they are withholding important parts of the system.

@AC (why is it always ACs..)

> Apparently they are withholding important parts of the system.

If any manufacturer is withholding GPL code, then that is an issue FOR THAT MANUFACTURER. It is not a "legal time-bomb for Android". It is simply a copyright issue, the same as using any unlicensed code would be. It is not an Android problem.

SFLC lawyer puts Naughton right: on Groklaw, at http://www.groklaw.net/article.php?story=20110815131443415
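For anyone trying to keep score in the thread above, the section 3 logic being argued over reduces to a small decision table. The sketch below is my own Python paraphrase of that logic, for illustration only; it is not the licence text, and it is certainly not legal advice.

```python
# A toy summary of the GPLv2 section 3 options debated above.
# My own paraphrase for illustration -- not the licence text, not legal advice.

def gplv2_section3_compliant(commercial: bool,
                             ships_source: bool,
                             makes_written_offer: bool,
                             passes_on_upstream_offer: bool) -> bool:
    if ships_source:              # 3(a): source accompanies the binaries
        return True
    if makes_written_offer:       # 3(b): written offer to supply source on request
        return True
    # 3(c): merely passing on the offer you received -- non-commercial only
    return passes_on_upstream_offer and not commercial

# The thread's core point: a written offer alone is compliant, even with
# no source posted on any website...
assert gplv2_section3_compliant(True, False, True, False)

# ...but a commercial distributor with neither source nor its own offer
# cannot hide behind 3(c).
assert not gplv2_section3_compliant(True, False, False, True)
```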
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008218.28/warc/CC-MAIN-20141125155648-00205-ip-10-235-23-156.ec2.internal.warc.gz
CC-MAIN-2014-49
34,287
270
https://fr.slideserve.com/clare/parallel-and-distributed-computing-overview-and-syllabus
code
Parallel and Distributed Computing: Overview and Syllabus
Professor Johnnie Baker; Guest Lecturer: Robert Walker

Instructors
• Professor Johnnie W. Baker (primary lecturer)
• Professor Robert Walker: guest lectures on specific architectures, including a guest lecture on his group's VLSI work on parallel architectures
• Possible guest lecturers from the parallel processing group: lectures in their areas of expertise; occasionally cover classes when I am away

Prerequisites
• Designed to be accessible to all graduate students in computer science
• Students who are not CS graduate students may also be qualified to take the course

Textbook and References
• Textbook: Parallel Programming in C with MPI and OpenMP, by Michael Quinn (McGraw-Hill, 2004)
• References for supplementary reading: the classroom slides will also include additional information from a wide range of sources
• Any additional needed reference material will be handed out or posted on the course website

Some Features of the Course
• Includes coverage of fundamental concepts of parallel computation, rather than focusing only on the latest trends, which are often quickly outdated due to rapid technological changes
• Also covers the currently popular cluster architectures, shared memory processors, and the MPI language; this is the focus of the Quinn textbook
• Covers the major types of parallel computation by looking at three key features of each: typical architectural features, typical programming languages used, and typical algorithm design techniques used

Some Specific Topics
• Fundamental concepts applicable to all parallel computation
• Asynchronous (MIMD) distributed memory computation: message passing communications; programming using the MPI language; architectural features; examples of typical algorithms
• Asynchronous (MIMD) shared memory computation: symmetric multiprocessors (SMPs); OpenMP language overview
• Synchronous computation: SIMD, vector, and pipeline computing; associative and multi-associative computing; programming using the ASC language; MultiC language overview; Fortran 90 and HPF language overviews; algorithm examples
• Interconnection networks: specific computer examples including the 2D mesh, hypercube, etc.; synchronous and asynchronous considerations
• MIMD-SIMD comparisons in real-time applications

Some Benefits of the Course
• While the principal focus is on parallel computation, most of the information is applicable to distributed computing
• There is a wide choice of thesis and dissertation topics in this area
• Several professors in the department work in this area or make major use of parallel computation in their research
• Students working on a thesis or dissertation in another area may benefit from being able to use parallel computation in this work
• Most large computational problems require a parallel or distributed system to satisfy their speed and memory requirements
• Parallel computation currently has major advantages over both distributed computation and grid computation for computationally intensive problems: programs are normally much simpler, architectures are much cheaper, and grid computing is currently fairly futuristic

Two Complementary Courses
• Parallel & Distributed Computing (Fall): architectures, languages, parallel programming, and algorithm examples for some architectures
• Parallel & Distributed Algorithms (Spring): important parallel models and designing efficient algorithms for various models; will be offered in Spring 2007
• PDC and PDA can be taken in either order, though the preference is for PDC to be taken first

Assignments and Grading
• Homework assignments: problems assigned for most chapters; probably 5-7 different assignments; some assignments will involve programming
• Course grade: based on homework, the midterm, and the final, with approximate weights of homework 40%, midterm exam 30%, and final exam 30%

Documented Disabilities
• If you have documented disabilities, please contact me and make me aware of your needs
• For information on disability accommodations, support, and verification procedures, please see www.kent.edu/sds

Course Website
• Will be established quickly; class slides, assignments, and some references will be posted there
• An online reference textbook and a pointer to a second online textbook will also be available at this site
• First assignment: read Chapter 1 in the textbook
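The first programming topic listed above is message passing with MPI. As a taste of what that looks like, here is a minimal hello-world-style sketch; note that it uses the mpi4py Python bindings purely as an illustration, whereas the Quinn textbook works in C.

```python
# Minimal message-passing sketch using the mpi4py bindings (illustrative;
# the course textbook uses C). Run with: mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator containing every launched process
rank = comm.Get_rank()     # this process's id: 0 .. size-1
size = comm.Get_size()     # total number of processes

if rank == 0:
    # Process 0 sends a small message to every other process...
    for dest in range(1, size):
        comm.send(f"hello, rank {dest}", dest=dest, tag=0)
    print(f"rank 0 of {size}: sent {size - 1} messages")
else:
    # ...and every other process receives and reports it.
    msg = comm.recv(source=0, tag=0)
    print(f"rank {rank} of {size}: received {msg!r}")
```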
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00224.warc.gz
CC-MAIN-2021-25
4,612
15
http://www.gotapex.com/threads/88485-anyone-have-their-cwna-here
code
I am going to take the exam Friday morning. I have been planning on taking it for the last 18 months, but I finally got my act together, did a quick reread of the material, and scheduled the exam. How hard is it in comparison to the official practice tests? Has anyone completed the CWSP? That's my next step; I will do that, hopefully by fall. I have the ACSA and possibly the CEH before that.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049615/warc/CC-MAIN-20131204131729-00013-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
392
3
https://help.goodgrants.com/hc/en-gb/articles/360002024875-Can-I-update-user-passwords-
code
All user accounts require a password: a combination of letters, numbers and symbols that must be a minimum of 12 characters long.
Who can update a password?
The user can update and manage their own password by:
- Logging into their account
- Going to their user profile by clicking on their name in the top-right of the screen: Name > Profile
- Updating the password and saving
But what if they can't remember their password?
All users can reset their password on the home page. From the login form, there is a Forgot password link. Clicking this link will allow the user to input their registered email address (or mobile number), and an email (or SMS) will then be sent to them with a one-click login link. The link will expire after 60 minutes and can only be used once.
Can I reset someone else's password?
In almost all cases, it's recommended that you ask your users to manage their own profiles, including their passwords. A grant manager may be able to change a user's password from the user menu, but with a condition! If a user is registered with Good Grants and their profile is associated with only your program, then you will be able to update the user's password. However, if the user profile is associated with multiple programs, then the password can only be updated by the owner of the profile. This restriction is a security precaution, as changing the user's password will affect their access to other programs too.
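The policy in the first paragraph (at least 12 characters, mixing letters, numbers and symbols) is easy to express in code. Below is a rough sketch of such a check; it is purely illustrative and not Good Grants' actual validation logic.

```python
import string

def is_valid_password(pw: str) -> bool:
    """Check a password against the policy described above: minimum 12
    characters, containing letters, numbers and symbols.
    (Illustrative sketch only -- not Good Grants' real implementation.)"""
    has_letter = any(c.isalpha() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_symbol = any(c in string.punctuation for c in pw)
    return len(pw) >= 12 and has_letter and has_digit and has_symbol

assert not is_valid_password("abc123!")           # too short
assert is_valid_password("correct-horse-42!")     # meets all four conditions
```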
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586465.3/warc/CC-MAIN-20210612222407-20210613012407-00543.warc.gz
CC-MAIN-2021-25
1,418
13
http://www.perlmonks.org/index.pl?node_id=988169
code
> From what Reini tells me, he is mostly focused on making some fundamental changes to the Perl 5 language (unlikely to be adopted)
> Reini is the only one working with ALL of Perl 5, not just a subset of Perl 5.

*Bzzzt!* Does not compute! Does not compute! You can't be working with all of Perl 5 if, in the meantime, you're changing the language. So, what parts of Perl would he be changing, syntax-wise or semantics-wise? If you're trying to compile all of Perl, then you may just have to rewrite the parts that it would like to handle differently. That rewrite should be done by the compiler or a precompiler, not by a programmer.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.54/warc/CC-MAIN-20180320205242-20180320225242-00459.warc.gz
CC-MAIN-2018-13
627
6
https://forums.xilinx.com/t5/Design-and-Debug-Techniques-Blog/bg-p/support_blog/label-name/ai%20engine%20series
code
In the first 3 articles of the AI Engine Series, we went through the different files needed for an AI Engine application. In this entry we will run the AI Engine compiler for an x86 target and have a look at the different outputs it produces.

In the previous entry in the AI Engine Series, we had a look into the graph file, which is the top level of the AI Engine application. We have seen how this graph file is used to instantiate and connect kernels together and to the ports of the AI Engine array. In this entry we will look at the kernel. In the template we are looking at, the 2 kernels, called first and second, are implementing the same function, which is called simple.

In the previous article, we had a first look at an AI Engine (AIE) application for Versal within the Vitis 2020.2 unified software platform. We have seen the structure of an AIE application project and how an AIE graph is connected to a simulation platform. We also looked at some APIs to initialize, run and terminate the graph. In this article we will have a closer look at the AIE graph inside the project.

Last July, in the article titled Versal ACAP AI Engines for Dummies, I introduced the AI Engine (AIE) array which is present in some Versal™ ACAP devices. In this new series of articles, the AI Engine Series, we will provide some examples of how to use the AI Engine tools integrated into the Vitis™ 2020.2 unified software platform. This first article is an introduction to the AIE programming environment.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00166.warc.gz
CC-MAIN-2021-17
1,494
7
https://support.mozilla.org/sq/questions/956196
code
This thread has been archived. Please ask a new question if you need help.
How do I stop automatic loading of the next page as I scroll to the bottom of an internet page that has a number of pages?
When I search with Google, as I scroll to near the bottom of the page, the next page of results automatically loads. I can never get to and use the similar searches, or "Searches related to ***", which are at the bottom of the page, as the next page loads and focuses at the top of that next page. Also, there is a bar with the page number, and a field with a "Jump to" command with a pop-down arrow with page numbers, and up and down arrows beside it which go up or down one page if used. This does not happen with, say, Bing, nor if I use Internet Explorer (aagh). I have no autopager type of add-ons or plugins. I may have had an add-on at one time. There are a bunch of autopager "strings", or lines, in about:config. How do you remove them, if that is the problem? How do I stop this from happening? It is extremely annoying.
All Replies (1)
Hello, can you try to replicate this behaviour when you launch Firefox in Safe Mode once? If not, maybe an add-on is interfering here...
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00823.warc.gz
CC-MAIN-2024-18
1,204
8
https://www.freelancer.de/job-search/extract-email-name-last-name/
code
[login to view URL] we want to integrate the same process in our website.

Become part of a motivated team from Munich! We are already working on the final steps of a new, innovative property platform. Now we want to put everything into the fine-tuning. Having fun together along the way is just as important to us as a great UI or a scalable backend :) We would particularly welcome coders who, after a trial phase, would also like to join perm...

I need a macro that can extract data from a PDF invoice to an Excel file.

Extract, word by word, all text and metadata from an HTML page. Save billions of HTML pages in a structured way in an SQL database so you can perform analysis on words and tags, minimizing the storage space required and maximizing performance, and still be able to reconstruct the HTML page with the same text, including punctuation marks and tags. See instructions.

I'm working on my consultancy firm and I need a data extraction/engineering/entry guy who can extract leads from multiple sources. The data involves extracting companies' email address, name, website, and contact person if they fit the criteria. I will tell you the sources where to fetch the data from, but in any case, having experience in this field is a

Ionic expert needed to extract code from an APK, then build a new APK. Only experts bid here.

Need a voter list in text form; it is in PDF format and contains images which are not easy to convert to text, so I need this data as text (Excel). There are three links from where I need the data; every link contains the data of the related state in India. Madhya Pradesh (MP) – [login to view URL] Rajasthan (RJ) – [login to view URL] Chhattisgarh (CG) – [login to view ...

...competent person to compile a targeted email list on LinkedIn using Python or another language. We do not want a webscraper to do this manually. Must use a script; manual will take too long. The FINAL product should just be a long Excel list with 5 columns as follows: last name | first name | company | position | email. Step 1: Narrow search for candidat...

I am looking to extract vast amounts of email addresses of brokers from real estate agent websites in New York. I need someone to produce code/a scraper that can extract their email addresses from various website databases, e.g. [login to view URL] I will also require a macro written up that allows me to email them individually. Thanks

I need someone to change the post date on my WordPress theme from "post created date" to "last updated date". This should be a quick 5-10 minute solution. It is needed in category post dates as well as single post pages. Pictures and website to follow project award.

Easy and quick project. My website currently uses an old version of ffmpeg, and I need the latest stable version in order for it to work properly. So I need somebody to uninstall the old version and install the new one. Since this is a quick job, TeamViewer access will be given to work from my machine.

I need you to develop some software for me. I would like this software to be developed for Windows using .NET. I need to extract news articles containing specific keywords from a set of 7 Persian news websites. I need to get the URL, the title, the date and the content of the articles in any text format.

Extract data from a website with pagination. Need to get the data into an Excel file. Not a complex data extract, but since it has pagination I am not able to get it using available tools. A developer with Excel VBA knowledge & web extraction experience should be able to do it easily.

I need someone to set up an extractor for data from a website. I need to do it using ParseHub. I need someone to be able to extract the following. Front page: name of product, price of product, pictures of product, details of product. Back page (from source code): SKU, vendor. Data from: Mirrorcity dot com dot au

I can provide a guide that is intended as an overview of the functionality of the VeloCloud SDK, the underlying API, and the data model on which it operates. The complete technical API documentation is distributed as a Swagger schema specification along with the SDK package and is a necessary companion to this document for developers.

Hello Everyone!!! This will be a unique and successful platform. The business strategy and what we are offering is one of a kind. It will be one of the top 10 cryptocurrency entities in the world. If you're looking for long term, then stop and bid, because this is for you: a once in a lifetime opportunity...DO NOT MISS IT!!! Languages required: (PHP / Bootstrap / MySQL / HTML5 / CSS3). Requireme...

I have a 50-second video in which I have a speech, but there is noise from the cars which are passing by. Is it possible to keep my speech (to be able to hear it well) and to delete the noise?

Hi, I'm looking for someone to help me extract numeric data from ~150 PDF documents available online and transpose it to an Excel sheet. I estimate the job will take about 6 to 7 hours.

Hello all. I a...script to get data from TripAdvisor for all 50 states in the US. For example: [login to view URL] Your task would be to extract all the 'things to do' on this page. Save the information for each state in a different Excel sheet. Links to all the 50 states are in the document.

I need you to provide me with thousands of emails from immigration and law companies based in Dubai and South Africa.

I need a coder who has experience with data extraction. The project details are in the Word doc attached. This will be $100/month, $25 a week. Please don't bid if you don't have experience with IP cycling/blocking.

I have a static website that needs a few parts to be fixed. The entire website is static; the only dynamic part is the form generator, but the form generator is not important. Send me a message so we can get this project done already. Send me a message so I can give you more details. Please, only serious developers. Thank you

...tools you know (let us know what you are good at, as this will help us choose you)
- When you find a "Provider", you will note down the following info:
- Price per hour
- Company name
- Address & number
- Website & social media links
- Find the nearest subway station to the office (if any)
- Opening hours for contact/visit.
- At the end of the day, you should have

...product information for the link below: [login to view URL] Extract ALL DATA; it must be captured in key-value pairs, creating all the necessary keys to reflect the attributes presented by the manufacturer. The info in images like "not made with natural

I need to build a dataset for the price of Ethx over the last year. You need to create an account on [login to view URL]; you will get 10 Ethx via this link: [login to view URL] If you already have one, write your email in the bid.

...some of the comments from the original developer. You may have a better idea. Speed is important… very important. My goal… load the email list into a database and extract all of the email addresses that have the same email domain as a URL in my domain list.

Example domain [login to view URL] [login to view URL] [login to view URL] Example text string c...

We need emails of up-to-date sales reps across the USA: name, email and source. Please suggest the cost of 1,000 emails of reps, 10,000, 100K etc. We need to get updated resume emails, and the source you got them from should be a job site. Please let us know. Thx

I need a freelancer who can extract product details from a Magento website and create a CSV file.

...table "Data", I want to create a PDF form containing the information for that record. In addition, when my client returns the form (with updated values), I want Access to extract the new/updated information and store it in the table called "Data". Attached is the template for the PDF FORM, as well as a small Access database with a table containing all

I need to be able to do the following: - upload 1 file to extract my fields and export to a PDF document - point to a file location to extract the fields from multiple files and export to a folder on the local filesystem. Languages used: - C# - .NET - iTextSharp

In Drupal 7, the Statistics Counter includes a "Content statistics: Views this month" in Views, but it resets on the first day of every month; all the figures get lost and restart again. We need to show the statistics in Views for the last XX days continually. Only bid if you know how to do this. Thanks

This is not for a company. I am trying to streamline a process that...would like to change things up a bit...it would be great to receive the SMS and run the copy-to application on a VPS. I need a cost-effective solution to receive SMS on a VPS -> extract key data -> automatically enter that data into the next application -> submit (complete) the transaction.

We need people with experience using ChronoScan software to extract information from scanned documents using OCR and save the extracted information into XML files. This job is a long-term contract.

Business involved in travel and sales of hotel accommodation. What we need is below, from developers only with previous experience in similar projects in the travel industry. To extract the data found in the web service "getFullHotelRates", included in the "ChannelManagerService" endpoint. Information on CAVAL web services is located at the website http://caval
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158001.44/warc/CC-MAIN-20180922005340-20180922025740-00197.warc.gz
CC-MAIN-2018-39
9,343
36
http://www.bandai.com/naruto/showpost.php?p=1334311&postcount=16
code
Don't use Sand Wall; Sand Tomb is better. Double Sand Blade works well with Infinite Embrace and Dreams of the Past. If you like that Jutsu's effect, then use IP + Advisor of the Sand + Wind Style: Infinite Sand Storm Devastation. Or just put your Gaara & Temari squad in play if you want your Satoosa Ninja!
Last edited by kirax7: 09-21-2012 at 02:09 AM.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705284037/warc/CC-MAIN-20130516115444-00067-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
358
4
http://www.mapforums.com/server-busy-problem-2745.html?s=23c478c562346829d5defabb1c0d94c7
code
I found a perfect solution on this website to solve the "Server Busy" error, by closing all running MapPoint processes:
Dim procs As Process()
Dim i As Integer
' GetProcessesByName is a shared method, so no Process instance is needed.
procs = Process.GetProcessesByName("MapPoint")
If procs.Length = 0 Then Return ' MapPoint wasn't loaded, so return
For i = 0 To procs.Length - 1
    procs(i).Kill() ' terminate each running MapPoint instance
Next i
This code only works in VB.NET. Can anyone please help by giving alternative code for VB6? Thanks a lot.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719468.5/warc/CC-MAIN-20161020183839-00454-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
425
10
https://rothfarb.faculty.music.ucsb.edu/courses/103/subject_answer.html
code
Fugue: Subject and Tonal Answer
In a tonal answer, scale degree 1 (SD1) in the subject is answered by SD5 in the answer and, conversely, SD5 in the subject is answered by SD1 in the answer. Intervals of a subject that ranges at its beginning in the lower part of the scale (SD1-5) will appear contracted in the answer (in the range of SD5-8), e.g. a leap of a fifth in the subject contracts to a leap of a fourth in the answer, a third to a second, a second to a unison. Conversely, intervals of a subject that ranges at its beginning in the upper part of the scale (SD5-8) will appear expanded in the answer (in the range of SD1-5), e.g. a leap of a fourth in the subject expands to a fifth in the answer, a unison to a second, a second to a third. These intervallic contractions and expansions are accommodations to the hold of tonic harmony, which generally prevails at the close of the subject, as the answer starts. The contractions and expansions in the answer last only until the prevailing tonic harmony shifts toward the dominant. As that shift occurs, the intervals of the subject are duplicated in the answer such that the answer is an exact transposition of the subject a fifth higher.

Example 1: WTC I, Fugue 2 (C minor)—expansion
SD5 appears near the beginning of the subject, signaling a tonal answer. The subject ranges at its beginning in the upper part of the C-minor scale (between SD5-8). The descending 4th leap in m. 1, from c2 to g1, expands in the answer to a descending fifth, from g2 to c2. Because of that expansion, the next interval of the subject, a rising minor second from g1 to ab1 (m. 1), also expands to become a rising minor third, c2 to eb2 (m. 3). The remaining notes of the answer are exact transpositions of the subject a fifth higher.

Example 2: WTC I, Fugue 11 (F major)—expansion
The subject begins on SD5, signaling a tonal answer. The initial interval of the subject, a rising major second from c1 to d1, expands in the answer to a major third from f1 to a1 in mm. 4-5. The remaining notes of the answer are exact fifth transpositions of the subject.

Example 3: WTC I, Fugue 17 (Ab major)—contraction
SD5 appears near the beginning of the subject (its second note), signaling a tonal answer. The ascending fifth at the subject's opening, ab-eb1, contracts in the answer to an ascending fourth, eb-ab. That contraction causes the subject's descending minor third at its opening, eb1-c1 (second and third notes), to become a minor second, ab-g, in the answer (m. 2). The remaining notes of the answer are fifth transpositions of the subject.

Example 4: WTC I, Fugue 24 (B minor)—contraction
This subject is one of the few modulating ones in volume 1 of the WTC. It modulates from B minor to F# minor. SD5 appearing as the first note of the subject signals a tonal answer. The answer to a modulating subject has the task of modulating back to tonic to prepare for the third subject entry, which will be in tonic (see m. 9). The descending major third, f#1-d1, at the opening of the subject—part of a descending arpeggiation of the tonic triad—contracts in the answer to become a major second, b-a (m. 4). Further, the subject's minor second, g1-f#1 (following the ascending minor-sixth leap from b to g1, m. 1), expands (!) to become a minor third, d1-b (m. 4). Because the answer must modulate back to tonic, the remaining intervals of the answer are not fifth but rather fourth transpositions of the subject. Had the remaining intervals been fifth transpositions of the subject, the answer would have modulated to the dominant of the dominant, i.e. to the key of C# minor.

In "real" answers, where SD5 does not occur prominently at or near the beginning of the subject, SD1 is answered by SD5, and SD5 by SD2. In WTC I, real answers may be found in fugue numbers 1 (C), 4 (c#), 5 (D), 6 (d), 9 (E), 10 (e; the only two-voice fugue in WTC I), 14 (f#), 15 (G), and 20 (a). Other modulating subjects in WTC I are fugue number 7, in Eb major (which, however, modulates back to tonic before its close), and number 18, in G# minor.
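The SD1/SD5 swap plus fifth transposition described above is mechanical enough to sketch in code. The following is a toy illustration in Python (my own, not from the original page); scale degrees are numbered 1-7, and `head` marks how many opening notes receive the tonal adjustment.

```python
# Toy model of a tonal answer's opening, following the rule above:
# SD1 and SD5 swap at the head of the subject; every other degree is
# transposed up a perfect fifth (degree + 4, wrapping around at 7).
# Illustration only -- not analysis software.

def up_a_fifth(degree: int) -> int:
    """Transpose a scale degree (1..7) up a perfect fifth."""
    return (degree - 1 + 4) % 7 + 1

def tonal_answer(subject_degrees: list[int], head: int = 1) -> list[int]:
    """Answer the first `head` notes tonally (1 <-> 5), the rest really."""
    answer = []
    for i, sd in enumerate(subject_degrees):
        if i < head and sd in (1, 5):
            answer.append(5 if sd == 1 else 1)  # tonal adjustment
        else:
            answer.append(up_a_fifth(sd))       # exact fifth transposition
    return answer

# Example 1 above: the C-minor subject's opening c2-g1-ab1 is degrees
# [1, 5, 6]; the tonal answer g2-c2-eb2 is degrees [5, 1, 3].
print(tonal_answer([1, 5, 6], head=2))  # -> [5, 1, 3]
```

Note how the function also reproduces the "real answer" rule in the last paragraph: past the head, `up_a_fifth(5)` yields 2, i.e. SD5 is answered by SD2.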
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00193.warc.gz
CC-MAIN-2022-40
4,119
28
https://www.sqa.org.uk/e-learning/NetInf205CD/page_04.htm
code
Planning for Network Protocol Security
Your initial design for your network infrastructure should include the types of protocols to be used. You should check these protocols carefully to see if there are any known security vulnerabilities associated with them. Many IP-based services, such as FTP, Telnet, and HTTP, use clear text by default. You will need to find an alternative way of securing that traffic. The best way of doing this is to use an IPSec security policy.
Servers run many different services, so it is critical for you to know what ports and protocols these services use. Once you have identified the services and protocols on your network, you can configure ports using IP filtering on your firewalls to allow specific types of traffic. A useful approach is to initially deny everything and then open up only the ports you need.
The table below lists popular server services and their associated well-known ports and protocols.
Service   Port   Protocol
FTP       21     TCP
Telnet    23     TCP
SMTP      25     TCP
DNS       53     TCP/UDP
HTTP      80     TCP
HTTPS     443    TCP
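To make the deny-by-default approach concrete, here is a small sketch; the service list and function names are my own illustration, and a real deployment would express this policy in firewall rules rather than application code.

```python
# Default-deny port filtering, sketched for illustration only; a real
# deployment would configure this on the firewall, not in application code.
ALLOWED = {
    ("tcp", 25),   # SMTP
    ("tcp", 53),   # DNS (zone transfers)
    ("udp", 53),   # DNS (queries)
    ("tcp", 80),   # HTTP
    ("tcp", 443),  # HTTPS
}

def permit(protocol: str, port: int) -> bool:
    """Deny everything by default; permit only explicitly opened ports."""
    return (protocol.lower(), port) in ALLOWED

assert permit("tcp", 443)      # HTTPS is explicitly opened
assert not permit("tcp", 23)   # Telnet stays blocked (clear text!)
```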
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00208.warc.gz
CC-MAIN-2022-05
932
4
https://pythonawesome.com/10-best-ear-defenders-for-construction/
code
When it comes to buying Ear Defenders For Construction, you will find many brands selling the same product. This can leave you confused, because you will not know which of these products will meet your needs and desires. After a lot of extensive research, we've put together a collection of the best Ear Defenders For Construction that are currently available on the market.
Best Ear Defenders For Construction Reviews
- ✅ Ideal for blocking out noise caused by shooting, airports, hunting, sports events, studying, woodworking, mowing, etc. NRR 32dB, ANSI S3.19 (US) certified
- ✅ Compact folding design for efficient storage and convenient portability; fits well in a range bag or backpack
- ✅ Designed with an adjustable metal prop and 360° rotatable ear cups; fits most head sizes and shapes, from kids to adults
- ✅ Constructed from a solid ABS shell and thickened noise-dampening foam to block out noise effectively
- ✅ These stylish and comfortable hearing protectors are ideal gifts for your family and friends
- Ergonomic headband with soft padding reduces the pressure on your head for superior comfort. Generous space inside the ear cups ensures breathability, while soft imitation leather provides a tight sound-proof seal
- Constructed from a solid ABS shell and thickened noise-dampening foam, the ear muffs offer an NRR 28dB rating and block out a great amount of noise; ANSI S3.19 (US) certified
- Adjustable headband and 360° rotatable ear cups with comfortable foam can be adjusted to fit most sizes, from kids to adults (pull / push the earmuffs on the metal string to adjust the size)
- Compact folding design for easy storage and convenient portability; the noise-canceling headphones fold up nicely to fit in a range bag, backpack or briefcase while taking up only a small space
- These stylish hearing protectors are ideal for shooting, hunting, sports events, studying, woodworking projects and lawn care, and extremely suitable for operating heavy machinery or a landscaping business
- 🎧【Strong Insulation】The ultrathin ear muffs with thickened noise-dampening foam offer an NRR 35dB rating and block out a great amount of noise; ANSI S3.19 (US) certified
- 🎧【Wide Application】Ideal for blocking out noise caused by shooting, hunting, airports, sports events, studying, woodworking projects and lawn care; extremely suitable for operating heavy machinery or a landscaping business
- 🎧【Ultra Comfort Design】Upgraded metal headband ensures long-term durability; generous space inside the ear cups ensures breathability and comfort, providing a tight sound-proof seal to silence your world "in a snap"
- 🎧【Suit Various Sizes】Designed with retractable stainless steel and 360° rotatable ear cups; can be adjusted to fit most sizes, from kids to adults
- 🎧【Extremely Durable】Low profile with trustworthy quality makes it lightweight but sturdy, ensuring a longer lifespan
- Fits the Defender Safety H1-CH Safety Helmet and most hard hats
- Clips in easily to the side of a helmet or hard hat
- Features 24dB ear protection
- Rotates up & down for easy mounting and un-mounting over the ear
- ANSI S3.19 & CE EN352-1 CERTIFIED: Lab tested and certified to US and European standards.
- 📌 Ergonomic double-deck headband reduces the pressure on your head for non-stop comfort during long wearing sessions; generous space inside the ear cups ensures breathability, while soft imitation leather provides a tight sound-proof seal
- 📌 Constructed from a solid ABS shell and thickened noise-dampening foam, the ear muffs offer an NRR 29dB rating and block out a great amount of noise; ANSI S3.19 (US) certified
- 📌 360° rotatable ear cups with comfortable foam can be adjusted; the non-deforming headband can be flexibly twisted without causing any damage
- 📌 Durable yet lightweight, this reliable ear protector is perfectly sized to fit nicely in your range bag, suitcase, backpack or cross-body bag without adding extra weight or bulk; easy to take with you wherever you go
- 📌 These stylish hearing protectors are ideal for shooting, hunting, sports events, studying, woodworking projects and lawn care, and extremely suitable for operating heavy machinery or a landscaping business
- MOST SOLID PROTECTION – The Vanderfields DBPRO44 has an official 32dB NRR and reduces up to 125dB. Vanderfields has high production standards and is used by elite military forces.
- GREAT FOR EVERY SITUATION – Ideal for blocking out noise caused by airports, shooting, woodworking, large crowds, gardening, fireworks and household tools, or other troublesome noise.
- EXTREME COMFORT – Lightweight and extremely comfortable to carry around due to the foldable design. The padded ear cushions provide a comfortable and snug fit.
- PERFECT FIT – Perfect for your ears and offers an enjoyable fit! Strong, high-quality parts make the protective ear muffs sturdy and robust. Comes with an adjustable headband for adults.
- WE STAND BY YOUR SIDE – Vanderfields offers high-quality protection products. Our team is always ready to help you get the most out of your products. 100% satisfaction guaranteed! >> Get active and purchase Vanderfields Protective Earmuffs now!
- 35dB – highest-NRR ear defenders for shooting, sports events, concerts, festivals, fireworks.
- HEARING PROTECTION: Protect your ears from extremely loud noises. Hearing protectors reduce the noise exposure level and help protect against potential hearing damage.
- MULTI-USE: Ideal for construction work, ground support, hunting, shooting range and target practice, air travel, concerts and sporting events, drum and band practice, high-traffic and racing zones, and enhanced quiet study.
- ADJUSTABLE – Adjustable headband design for a perfect fit, from kids to adults.
- COMFORTABLE FIT: For all-day wear, with deep ear cavities for roomy comfort and an extra-padded adjustable headband.
- Ultra-low-profile, lightweight, slim design with rubberized ear cups
- NRR 27 protection from noise
- Compactly folds up for easy transport and storage
- Features sound-dampening composite housing and a comfortable headband with a metal wire frame
- 【Better Soundproofing】Keep out noise and be quiet on your terms! Southvo ear muffs are made with 2 layers of strong professional noise-cancelling foam, a tight-sealing material, and a premium double shell that provides NRR 26dB / SNR 32dB noise reduction. Complies with the American standard (ANSI S3.19) and the European standard (EN 352-1).
- 【Super Soft and Comfortable】The ergonomic headband with soft padding reduces pressure on the head for superior comfort. Generous space inside the ear cups ensures breathability, while soft imitation leather provides a tight, noise-isolating seal.
- 【Fits All Sizes】The hearing protectors, made of durable ABS and PU along with telescopic steel wire and 360° rotatable ear cups, can easily stretch, contract and bend to fit all head sizes and shapes. Compact foldable design for easy storage and portability.
- 【Effective Hearing Protection】Suitable as ear defenders for construction sites, shooting, hunting, lawn mowing, carpentry, factories, workshops, airports, concerts, streets and any noisy environment. Safety earmuffs can help you stay focused while you work and study.
- 【Notice】Be extra careful using the ear protection when outdoors, or in any situation where not being able to hear all sounds may be a safety risk.
- ☊【Effective Noise Reduction】Double-layer sound-damping foam and a twin-cup design provide NRR 24dB / SNR 31dB noise reduction. ODDLOOPS noise reduction ear muffs can protect the inner ear from damage caused by loud equipment. Ideal for ear protection on construction sites, welding, mowing, trains, factories, etc.
- ☊【Adjustable & Portable】ODDLOOPS hearing protection ear muffs, designed with retractable stainless steel and 360° rotatable ear cups, can be adjusted to fit most head sizes, both children and adults. Featuring a foldable style, ODDLOOPS safety ear muffs can be collapsed into the headband for convenient storage and carrying.
- ☊【Comfortable Fit】ODDLOOPS earmuffs are equipped with smooth leather filled with soft foam. The headband features a soft pad to dissipate the pressure on your head. The adjustable band and swivel ear cups let the safety ear muffs better embrace your ears to reduce noise. The wide, padded headband and soft ear pads are great for longer periods of wear.
- ☊【Made for Hearing Protection】ODDLOOPS noise reduction safety earmuffs can prevent ear damage when shooting, hunting, mowing or woodworking. Ideal for blocking noise from construction sites, factories, workshops, airports, concerts, streets and any noisy environment. Plus, the ear defenders can help you stay focused while working and studying.
- ☊【We Promise】90-DAY NO-RISK MONEY-BACK GUARANTEE and 1-YEAR WARRANTY. 100% satisfaction guaranteed! If you have any questions or suggestions, please feel free to contact us; we are here 24/7 to chat and help you.
In today's market, where the same type of product is available from almost every brand, finding the right Ear Defenders For Construction is a challenge. Every purchase requires research. Before you buy anything, you need to answer the following questions:
- What are the features of the best Ear Defenders For Construction?
- How do you find the right Ear Defenders For Construction within your budget?
- What is the average price of a good set of Ear Defenders For Construction?
Our data analysis platform helps you answer these questions using a state-of-the-art algorithm. We analyze thousands of reviews from real users to generate a usability score for each brand. This usability score is unbiased and powered by people's experience with the product. Then we provide you with an unbiased list of the 10 best affordable Ear Defenders For Construction to buy. Our goal is to make your decision-making easy and your shopping experience fun.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00646.warc.gz
CC-MAIN-2022-40
9,988
58
https://www.full30.com/watch/MDE3MjQ1/concealed-carry-vs-open-cary
code
There is a lot of debate over concealed carry vs. open carry of firearms. There are tens of thousands of concealed carry license holders across the country. Open carry is very common in parts of this country as well. There are advantages and disadvantages to consider for each method of carrying a gun.
Read the article: https://boomsticktactical.com/2018/10/18/concealed-carry-vs-open-carry/
Pistol build video: https://www.full30.com/video/822fb91c4bf6b1ab7c98108c383eb422
Ozark Armament Flip-Up Backup Battle Sights: https://amzn.to/2pWqWEb
Ozark Armament 45-Degree Offset Flip-Up Backup Sights: https://amzn.to/2COiVK9
Magpul MBUS Generation II Backup Sights, Front & Rear Set: https://amzn.to/2CijA5w
Magpul Industries Quick Detach Sling Swivel: https://amzn.to/2CNFVZJ
Join me on Full30: https://www.full30.com/channels/boomsticktactical
T-shirts: http://boomsticktactical.com/shop/
Here are several different types of fun targets to shoot:
Sink the Boat target: http://amzn.to/2uRIIff
Correcting shooting errors target: http://amzn.to/2w1pbaF
Shooting game targets: http://amzn.to/2veAZZz
Zombie targets: http://amzn.to/2uUucli
Hostage targets: http://amzn.to/2hkhGbM
Pistol Poker targets: http://amzn.to/2hjXHKh
Police training target: http://amzn.to/2hjkWDX
Join the NRA and get a discount here: https://membership.nrahq.org/forms/signup.asp?campaignid=XI030978
Visit my website: http://boomsticktactical.com/
Join us on Facebook: https://www.facebook.com/BoomstickTactical/
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00077.warc.gz
CC-MAIN-2019-13
1,482
4
https://virtuallyfun.com/2011/08/26/follow-up-on-dungeon-zork-for-rt-11/
code
I have documented the install steps back here, a long while back. However, I recently got a request for a binary of this from someone wanting to load it up on a physical PDP-11. The steps sure are daunting, and of course time-consuming for a first-time user, so while I was building Dungeon again, I thought I should take this opportunity to package it up and make it more accessible for everyone. This is the output of my 'effort', although the real thanks for this goes to Bob Supnik, not only for writing SIMH, making it possible, but also for porting Dungeon to Fortran way back then. Extract the archive using 7zip, then run pdp11.exe and it should boot you up into RT-11. Then just type in the start command, and you should be teleported to the open field west of a big white house with a boarded front door…
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652184.68/warc/CC-MAIN-20230605221713-20230606011713-00332.warc.gz
CC-MAIN-2023-23
808
4
http://www.computerforums.org/forums/hardware/video-card-still-does-not-work-angry-165109.html
code
A while ago I bought an Nvidia 7600GS video card and it didn't work; I later realized that my PSU was not powerful enough. I had a 305W power supply, and the card required at least 350W. So I bought a new PSU (400W), plugged it in, installed it, and it worked fine. But when I plug the graphics card in, it still DOES NOT work. The fan spins but nothing is happening! I have no idea what's going on! I had an integrated GPU before and have never used the PCI Express slot. But I do notice there's a PCI-E plug on my new PSU. Do I have to plug that into a slot or something on the motherboard to power up the slot somehow?? Btw, the PC is a Dell Dimension 4700. I also know everything about the BIOS, making sure it is set to recognize new hardware, etc. Also, I'm using a DVI to VGA converter, but I doubt that has anything to do with it. I have tried two different DVI to VGA converters and neither of them worked.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719027.25/warc/CC-MAIN-20161020183839-00281-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
914
4
http://ssgwbn.xyz/archives/17099
code
Novel — Birth of the Demonic Sword
Chapter 1902: Second phase

Noah's mental sphere had been on the verge of the ninth rank for a while now. The lack of the ethereal center of power was the only feature that stopped it from advancing and reaching the next stage of the known cultivation path.

The ethereal center of power was artificial, so his dantian, body, and black hole sensed that something was off whenever they gained access to its space. They had the innate urge to fill that space, but they also knew that their time had yet to come.

Night, Duanlong, and the Demonic Sword shot out of his body and took their positions around him. The parasite also joined the event by making a few roots pierce Noah's right palm and surround him with their corrosive properties.

Noah had two different sensations trying to seize control of his mind. The first was the strange feeling created by the ethereal center of power condensing over his back and transforming into an invisible force that remained connected to his existence in a way that even he could not perceive.

Everything happened so quickly that Noah didn't have time to inspect his new mental energy before it vanished again. His mental walls were still stabilizing, so pain continued to fill his consciousness and hinder his investigation. Still, he trusted his centers of power enough to let everything unfold without his supervision.

Alexander had confirmed that the breakthrough to the ninth rank didn't trigger mandatory Tribulations. The knowledge gained in the Mortal Lands had turned out to be true: Heaven and Earth's system only set restrictions before the heroic and divine ranks, but it set nothing for the existences that approached the peak.

Noah had already wondered about that feature and found an explanation for it. The existences that could approach the ninth rank had already spent an entire cultivation journey surviving the three mandatory Tribulations of the system. Sending them again when they were about to reach the peak was simply pointless.

The sudden event didn't surprise Noah too much. He hadn't expected it to happen right after completing the ethereal center of power, but he had long since prepared for it.

The new mental energy didn't even fill a tenth of the whole space inside the ethereal center of power. Moreover, a force sent it back toward Noah's mental sphere after it had bathed in that potential for a few seconds.

The faint worry that the old rulers could do something to hinder the breakthrough reached Noah's mind. Still, he wouldn't care too much about that after spending ages developing the ethereal center of power inside their structures. Besides, his companions had decided to come out whenever they sensed their Master losing control of his thoughts.

Noah's mental sphere had created new mental energy during those events, but the arrival of the empowered fuel threw everything into chaos. The insides of his mind didn't go through proper changes, but his mental walls did, and that transformation forced him to endure something similar to a second breakthrough.

The transparent mental walls that carried a faint scarlet shade darkened as the empowered mental energy crashed onto their surface and fused with their fabric. Noah's mind had reached the ninth rank, but the ethereal center of power didn't accept that level. The mental sphere continued to send massive quantities of mental energy toward the ethereal blackness, but that center of power accepted nothing below the ninth rank. Only the tiny brims of fuel that had managed to step into that realm were able to remain inside that place, but they couldn't affect the amazing structure made of Noah's potential.

Eventually, the new mental energy could fill the ethereal blackness without getting destroyed by its vicious pressure. The new center of power expanded into a dimension that felt different from the spaces that Noah created through the Shadow Domain. It didn't even seem similar to the areas that contained the Mortal Lands among the void. It was something different and deeply personal.

Understanding dawned upon Noah as he continued to struggle through his pain. He recalled that rank 9 existences had to become worlds, and his ethereal center of power set the beginning of that space.

Noah now had a personal dimension defined by the borders of the world created by his existence. That area was utterly black, heavy, and unsuitable for life, but it was also the perfect culmination of his cultivation journey. The only issue was its incompleteness.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00739.warc.gz
CC-MAIN-2022-40
10,118
40
http://www.meetup.com/NYC-Data-Business-Meetup/members/143513842/
code
New York, NY, USA May 27, 2014 Have been in analytics, mainly healthcare, for 20+ years. There are tremendous opportunities with emerging technologies and data streams to build tools, processes, and apps that effect better healthcare. Pfizer, inVentiv Health, Core Access Group, WebMD - VP, SVP Data Science, Analytics. Healthcare data science professional with 20+ years of experience at WebMD, Pfizer, inVentiv Health, and CORE Access Group, spanning product, media, web, managed care, and new product analytics.
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006925.85/warc/CC-MAIN-20151001222006-00046-ip-10-137-6-227.ec2.internal.warc.gz
CC-MAIN-2015-40
480
5
https://globalcitizenschs.wordpress.com/about/bios/meet-the-students-2015-2016/sahtiya-hosoda-hammell/
code
Hi everyone! I am a doctoral student at the University of Virginia and this is my 3rd year working with this class at CHS. I always find the question “What do you need to know to understand me?” really difficult to answer. I feel like some people know ALL the facts about me, like age, ethnicity, education, etc., but still fail to understand me; others know very little information about me, but really get me because they have read something I wrote or heard me speak on a topic that resonates with me. I guess that is a long way of saying that to understand me, you have to know that I care a lot about the society I live in and about education; that is why I am always trying to make things better wherever I go :). I am big on reading and thinking and discussing, so I am constantly changing and growing. I am looking forward to learning a lot from the class this year!
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00197.warc.gz
CC-MAIN-2020-29
872
2
http://www.phpdeveloper.org/news/22876
code
Matt Stauffer has posted a set of helpful hints for developers using Sublime Text (3) to help make them more efficient and make writing code much easier. A lot of folks in the PHP community have been checking out PHPStorm lately, including myself and most of the developers I work with. We love the code intelligence we get from PHPStorm, but still miss the speed, quick boot-up, and convenience of Sublime Text. Before I blindly assume PHPStorm is the only way to go, I wanted to see: Can I bring the things a PHP-focused IDE provides PHP developers back to Sublime Text and get the best of both worlds? He starts with a list of "must haves" for him to be able to move away from PHPStorm: features it provides that Sublime, an editor rather than an IDE, might not come with out of the box. Most of his suggestions use the Package Control functionality in Sublime, so you'll need that installed to try out his examples. He then shows several tools you can install, including:
- Sublime PHP Companion (package)
- AllAutocomplete (package)
- Cmd-click for function definition
- Integrating code sniffing and PHP_CodeSniffer
- DocBlockr (package)
- Git helpers
...and many more. If you're a Sublime Text user, definitely take a look at his list and see if you can find something to help make your development easier.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646937.1/warc/CC-MAIN-20230531150014-20230531180014-00250.warc.gz
CC-MAIN-2023-23
1,290
10
http://www.nvnews.net/vbulletin/showpost.php?p=1319564&postcount=5
code
Originally Posted by CaptNKILL Only on a pre-encoded signal though. Like if you set your sound card to pass through AC3 streams when playing DVDs. In games (and everything else) you'd get only stereo from an X-Fi using any kind of digital connection. Incorrect, if I interpreted his post correctly. You would get Dolby Pro Logic surround. You could get this with analog cables as well, but the signal is far cleaner if he went digital. Then again, maybe I'm misunderstanding his post entirely.
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986451.45/warc/CC-MAIN-20150728002306-00091-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
490
5
https://blog-sociology.livejournal.com/248751.html?nojs=1
code
Does anybody know what we call bloggers that have very many friends (1,000 and more)? Some of them are famous; some just add many people, hoping that those people will then add them back and they will have many friends. In Russian this is called "Тысячник" (literally, a "thousander"). But I believe there should also be an English word! Help me, please! This is for our investigation of the Russian LJ :)))))) Thanks a lot!
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647883.57/warc/CC-MAIN-20180322112241-20180322132241-00793.warc.gz
CC-MAIN-2018-13
396
2
https://direct.mit.edu/isal/proceedings/alife2018/30/444/99631
code
Lindenmayer systems (L-systems) are a formal grammar system that iteratively creates new strings from previous strings by rewriting each of its symbols in parallel according to a set of rewriting rules. The symbols in the string sequence produced can be taken as instructions to produce a visualization of a process over time. They have been especially useful for creating accurate simulations of plants. The L-system inductive inference problem is the problem of inferring an L-system that initially produces a given sequence of strings. Here, a new tool to solve this problem, PMIT-D0L, is introduced; it combines projected solutions with linear Diophantine equations, heuristics, and a genetic algorithm. PMIT-D0L was validated using 28 previously developed deterministic context-free L-systems of different complexity, and it can infer every L-system in the testbed with a 100% success rate in less than 4 seconds, a significant improvement over existing implemented tools. This research was supported in part by a grant from the Plant Phenotyping and Imaging Research Centre.
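To make the formalism concrete, here is a minimal sketch of a D0L-system derivation in Java. This is illustrative only, not the authors' PMIT-D0L tool; it uses Lindenmayer's classic algae rules A → AB, B → A. The inference problem the paper tackles is the reverse direction: recovering the rule table from a string sequence like the one this prints.

```java
import java.util.Map;

// Minimal sketch of a deterministic context-free L-system (D0L):
// every symbol of the current string is rewritten in parallel
// according to a fixed rule table.
public class D0LSystem {
    public static void main(String[] args) {
        // Lindenmayer's "algae" system: A -> AB, B -> A
        Map<Character, String> rules = Map.of('A', "AB", 'B', "A");
        String s = "A"; // axiom
        for (int i = 0; i < 5; i++) {
            StringBuilder next = new StringBuilder();
            for (char c : s.toCharArray()) {
                // Symbols without a rule map to themselves
                next.append(rules.getOrDefault(c, String.valueOf(c)));
            }
            s = next.toString();
            System.out.println(s); // AB, ABA, ABAAB, ABAABABA, ...
        }
    }
}
```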
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00618.warc.gz
CC-MAIN-2023-40
1,075
2
https://www.richardcleaver.com/tag/apple/page/8/
code
I found this great website called Presentation Zen, authored by Garr Reynolds. There are a couple of interesting posts: one on the presentation style of Bill Gates, and another that contrasts the visual approach used by Bill Gates with that of Steve Jobs. Reynolds’ view is that we can use many of the concepts in Zen and Zen aesthetics to compare their presentation visuals and gain some insight into how to improve our own visuals. Some great ideas. And some really amusing differences in presentation styles. Jobs is not afraid to use blank slides as a way to enhance his presentation. And he keeps his arms wide open and his body language is natural. Gates typically uses dense PowerPoint slides. His body language does not come across the same way. The two pictures convey the difference.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00795.warc.gz
CC-MAIN-2023-50
782
2
https://multiphasicapps.net/doc/tip/assets/developer-notes/stephanie-gawroriski/2018/04/03.mkd
code
Still not sure what to do for widgets. I think probably the base thing to do would be to have base widgets that are then final, which then have sub-widgets which do the actual native stuff. It just seems a bit nasty. But the best way for sub-widgets would be if they were interfaces and not classes.

So this is still complex and rather confusing, because whatever method I pick will end up being complex and rather a pain to use. The only thing that would be the simplest to do is if I just directly used a canvas and performed all the drawing of widgets and such on the local end. Basically, instead of doing things on the server, all the server would do is send a size to a client and such. Then the client can send an updated buffer to the server. There could also be event queues and such. Then there would just be a single buffer push from the server and such, treating everything as a bunch of pixels. It would work on every system, although that would completely remove accessibility and native navigation when it came to widgets and such. It would completely break on that. It also would not match the native feel of the OS, which I still would rather want.

So what I have now is rather nice, but the base class, when more widget types are involved, is a complete mess. I really do need native support because that simplifies accessibility and integration of SquirrelJME. Perhaps I just derive sub-classes for every type of widget and some actions will just be duplicated and such. What I can do now, though, is move the callback code off of widgets into their own class; that would make things easier to use for the most part.

Yes, that can work: sub-widgets that are specialized. Just instead of sub-widget I suppose they could be called natives. Perhaps actually just local things which are then given handle information as needed. So basically there will just be something like a LocalCanvas, which is an interface that is initialized and given a handle to act upon locally, and an LcdLocalCallback for any events that happen on it and such. Then all of the LcdWidget stuff will not be extended; it will just use a generic type of sorts to show which interface is used for the sub-class. Basically every collectable would be created with a native resource and such, so that would be derived for everything, including menus perhaps. Menus would be a bit complex because the widget would need to build them up.

I can actually re-implement the server without changing any client code since it works equally for the most part. Since the old code would be rather messy to correct to changes like this, I will just write up completely new code to handle the stuff and such. I will completely change the design so that collectables of certain types have collectable resources and such. I have to design it with everything in mind so that it would be the least hassle to do things. Well, I can probably keep it. I just need a kind of adaptable thing, like a menu viewer and a ticker viewer and such.
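A rough sketch of what that split could look like, using the names floated above. This is purely illustrative of the design being considered, not actual SquirrelJME code:

```java
// Illustrative only: the local (client-side) half of a widget, bound to a
// handle that the server acts upon remotely.
interface LocalCanvas {
    /** Binds this local widget to its server-side handle and event sink. */
    void initialize(int handle, LcdLocalCallback callback);

    /** The server tells the client what size the widget should be. */
    void setSize(int width, int height);
}

// Illustrative only: events flowing back from the local side to the server.
interface LcdLocalCallback {
    /** The client pushes an updated pixel buffer for the given handle. */
    void bufferUpdated(int handle, int[] rgbPixels, int width, int height);

    /** A native input event (key, pointer, ...) queued for the server. */
    void eventQueued(int handle, int eventType, int code);
}
```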
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00598.warc.gz
CC-MAIN-2024-10
2,985
20
https://myoutfitonline.com/products/total-bomb-top-black
code
THIS SIZE IS SOLD OUT, Sign up for restock notifications! Elevate the basics in your wardrobe! Our one shoulder crop top beautifully highlights your midriff, fully lined for a comfortable fit! Pair with our Credit Me Belt and Hot Girl Jeans for a night out. Model is 5'7 and wears size S with our Natalia Joggers. Fast Shipping / Easy Returns. TAKE NOTE - THESE SIZE CHARTS ARE TO BE USED AS A GUIDE ONLY.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00278.warc.gz
CC-MAIN-2020-05
405
5
https://help.britecore.com/hc/en-us/articles/8723587393043-Add-a-contractor-to-a-claim
code
You can add a contractor in both the Contacts and Claims modules. To add a contractor in the Contacts module, see Add a contractor. A contractor will only be attached to a specific claim if you add the contractor to the claim in the Claims module. To add a contractor to a claim, open the desired claim and:
1. Select Contacts from the claim menu.
2. Navigate to the Contractor section of the Contacts screen.
3. Select + Add a Contractor.
4. In the Name box, type the name of the contractor.
   - If the name you type matches the name of an existing contractor in BriteCore, a list of possible matching contractors will appear below the Name box. Select the correct name.
   - If the name you type matches the name of an existing BriteCore contact, after you’ve typed the name and pressed Enter/Return on your keyboard, a Did You Know? dialog box will open with a list of possible matching contacts. Select the correct name from the dialog box. If you can’t find a matching name, select Create New Contact. See Complete the New Contact dialog box for an individual or Complete the New Contact dialog box for an organization.
   - If the name you type doesn’t match a name that exists in BriteCore, when you press Enter/Return after typing the name, the New Contact dialog box will open. See Complete the New Contact dialog box for an individual or Complete the New Contact dialog box for an organization.
5. In the Description box, type a brief description of what the contractor is being contracted to do.
Figure 1: The Contractor Name and Description boxes.
6. To add an additional contractor, select + Add a Contractor and repeat steps 4 and 5.
To view additional contact information about the contractor, select the blue person icon. When you add or select a contractor’s name, BriteCore displays the contractor’s name and contact information in the Contractor section.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00510.warc.gz
CC-MAIN-2023-50
1,869
13
https://answers.netlify.com/t/sign-up-works-but-no-users/2784
code
I am having a weird issue and I’m not sure if it’s designed that way, but here goes: I want to use Identity for a small website, and have implemented signup and login by following the gotrue-js docs. Both methods seem to work fine, as in, I get no errors and a response that matches my expectations. So, I can sign up and then log in with those same credentials just fine. However, none of the users from my testing are showing up in Identity, and when I go to settings, the “Active users” count is still 0/1000. Is there something I’m missing, or shouldn’t those users show up on app.netlify.com -> Identity, or at least count towards my 1000 active users quota? Edit: A few pieces of information upfront:
- Registration is set to “open”
- Autoconfirm is turned on
- I am making the requests from a local dev environment
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00520.warc.gz
CC-MAIN-2021-17
826
9
https://www.halock.com/tag/technology/
code
Hackers are relentless adversaries who incessantly create new tools and methodologies to take advantage of known exploitable vulnerabilities within networks. When someone says “you have malware”, what do you think of? Do you remember the “old days” when a virus was simply an annoyance, blue screening Windows machines, slowing your machine speed, or popping up false firewall advertisements? Unfortunately, those “old days” are long gone. Malware has changed drastically in recent years.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00017.warc.gz
CC-MAIN-2024-10
500
2
https://blender.stackexchange.com/questions/161501/world-grid-is-visible-in-render-how-to-remove
code
I've been trying to find a solution for quite a while and I couldn't find one, so I decided to make a post here. I've been trying to learn motion tracking with Blender. I got the track and attached it to an image object. I then added a background image to the camera, and the image object was moving in front of the background image. Everything was great until I pressed render, and guess what? The world grid was IN my render. Is there a way to have a camera background image and not have the grid on top? I tried to put the background image in depth: front, but then the image object was behind the background image.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371880945.85/warc/CC-MAIN-20200409220932-20200410011432-00278.warc.gz
CC-MAIN-2020-16
621
2
http://www.tomshardware.com/answers/id-2079136/long-beep-1short-beep-ram-upgrade.html
code
CPU: Pentium(R) Dual-Core CPU E6600 @ 3.06GHz RAM: 2GB HP memory (I don't know how many MHz) MOBO: it's an LGA 775 board. I have 2x4GB @1333MHz Corsair Value Select memory and 2x2GB @1333MHz Corsair Value Select memory. Neither of them will work; the board only gives me some beep codes (1 long beep followed by 1 short beep). I have many computer parts in my shop but all of them will not work. I had a thread about this but I think my account was bugged; all of my notifications and private messages were gone. Please reply ASAP. Sorry for my BAD English. Thanks!! God Bless!!!
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661311.64/warc/CC-MAIN-20160924173741-00299-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
573
8
https://schumacher.sh/2016/03/08/accelerating-mysql-file-imports.html
code
Accelerating MySQL file imports

Just a quick note on how to speed up the slow process of importing SQL files on Linux systems. I wanted to import a 50 MiB MySQL dump (and monitor it using pv), which just took way too long.

pv database.sql | mysql -u user -p database
Enter password:
148KiB 0:00:13 [13.8KiB/s] [> ] 0% ETA 1:13:21
160KiB 0:00:14 [13.8KiB/s] [> ] 0% ETA 1:13:03
228KiB 0:00:19 [12.5KiB/s] [> ] 0% ETA 1:09:29

I’m importing a 50 MiB file on an SSD here, so that can’t be right. Did I mention pv is great? Anyway, the solution is to disable autocommit mode, which performs a log flush to disk for every insert. More information here. Just open the SQL file with any text editor and add these statements to the very top and very bottom of the file (in nano, use CTRL + w + v to jump to the bottom):

SET autocommit=0;
[content]
COMMIT;

That’s it. SQL imports should now be finished in no time.

49.1MiB 0:00:52 [ 960KiB/s] [=========>] 100%
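The same principle applies when inserting rows programmatically rather than from a dump file. A minimal JDBC sketch, not from the original post; the table, column, and connection details are placeholders invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Batch many INSERTs inside one transaction instead of letting the
// driver flush the log to disk after every single statement.
public class BulkInsert {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/database", "user", "password")) {
            con.setAutoCommit(false); // same effect as SET autocommit=0;
            try (Statement st = con.createStatement()) {
                for (int i = 0; i < 100_000; i++) {
                    st.addBatch("INSERT INTO t (n) VALUES (" + i + ")");
                }
                st.executeBatch();
            }
            con.commit(); // one flush instead of 100,000
        }
    }
}
```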
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00694.warc.gz
CC-MAIN-2022-49
955
9
https://www.mycouponcodesindia.com/offer/get-10-off-on-raksha-bandhan-sale-at-beardo/
code
Get 10% off in the Raksha Bandhan Sale at Beardo. Hurry! Grab the offer now! T&C apply. TIME LEFT - LIMITED OFFER! 12% off on Beard Growth Oil at Happilyunmarried. 20% off on Shahnaz Husain Skin Care Products at Aplava.
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00480.warc.gz
CC-MAIN-2018-39
281
6
https://forum.freecodecamp.org/t/radio-label-html/423506
code
I am entering code exactly the way the lesson describes, and it keeps failing me with: “Each of your two radio button elements should be nested in its own label element.” I don’t get it. This is what I was inputting:

<input id="indoor" type="radio" name="indoor-outdoor">
<input id="outdoor" type="radio" name="indoor-outdoor">

It is nested correctly in the form. I don’t understand why it is not passing…

Please share the challenge link and your whole code for better help.

I think all the challenges in that part of the curriculum request the input element nested in the label. There should be an example in the challenge description.

I’ve edited your post for readability. When you enter a code block into a forum post, please precede it with a separate line of three backticks and follow it with a separate line of three backticks to make it easier to read. You can also use the “preformatted text” tool in the editor (</>) to add backticks around text. See this post to find the backtick on your keyboard. Note: backticks (`) are not single quotes (’).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00322.warc.gz
CC-MAIN-2022-21
1,051
14
https://insurgency.fandom.com/wiki/Spotting
code
Spotting is a concept in Insurgency. Spotting allows a player to call out enemies on the battlefield, and in some cases reveal them to other players. The spotted enemies appear as a triangular red marking in the cardinal direction called out by the player. - Previous versions of the game had the player call out in the same manner, but had the enemy appear as a square red marker on the player's and his teammates' HUD, as is shown in the image to the right. Spotting was done by pressing (not holding) the radial menu button while looking at an enemy. Note that the square red marker was a 2D marker, and not a 3D marker; thus, it would not move from its initial spotting location.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583668324.55/warc/CC-MAIN-20190119135934-20190119161934-00197.warc.gz
CC-MAIN-2019-04
686
3
https://www.runscope.com/about
code
Gracie self-identifies as a lifelong learner, an avid outdoors person, and a software engineer. Her path to programming was a rambling one, which included a music degree, a lot of open courseware, and an all-girls engineering fellowship called Hackbright Academy. She's a big fan of developer tools and services that help her make sense of the inscrutable. When she's not poking at some technology, you might find her romping outside, strumming a guitar, or ogling someone else's dog and wishing it were her dog. Neil is a software developer and platform evangelist, and loves a good hackathon. He most recently worked at Intel Mashery, where he was the lead evangelist and managed developer platform partnerships. Prior to Mashery, Neil founded several Web and software companies in industries ranging from search and e-commerce to real estate and healthcare. He is the co-creator/maintainer of I/O Docs Community Edition, an open-source interactive API documentation tool. Audrey is a self-professed organization junkie and problem solver. She enjoys bicycling and logic puzzles, although not at the same time. She has admittedly taken a circuitous path through the tech realm, where she has performed roles in technical documentation/training, software QA, and technical solution implementation. At this stage in her career, her main focus is having an impact on building practical developer tools.
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989507.42/warc/CC-MAIN-20150728002309-00195-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
1,398
3
https://public-inbox.org/git/[email protected]/
code
From: Johannes Schindelin <[email protected]>
To: Denton Liu <[email protected]>
Cc: Git Mailing List <[email protected]>, Junio C Hamano <[email protected]>, Philip Oakley <[email protected]>, Elijah Newren <[email protected]>, Viresh Kumar <[email protected]>
Subject: Re: Deprecating git diff ..; dealing with other ranges
Date: Mon, 11 Mar 2019 14:19:21 +0100 (STD)
Message-ID: <[email protected]> (raw)

On Mon, 11 Mar 2019, Denton Liu wrote:

> I was in the process of deprecating `git diff <commit>..<commit>` as
> discussed here. However, I ran into a weird case that I'm not sure
> how to deal with.
>
> In t3430-rebase-merges.sh:382, we have the following test case which
> invokes git diff:
>
> test_expect_success 'with --autosquash and --exec' '
>     git checkout -b with-exec H &&
>     echo Booh >B.t &&
>     test_tick &&
>     git commit --fixup B B.t &&
>     write_script show.sh <<-\EOF &&
>     subject="$(git show -s --format=%s HEAD)"
> =>  content="$(git diff HEAD^! | tail -n 1)"
>     echo "$subject: $content"
>     EOF
>     test_tick &&
>     git rebase -ir --autosquash --exec ./show.sh A >actual &&
>     grep "B: +Booh" actual &&
>     grep "E: +Booh" actual &&
>     grep "G: +G" actual
> '
>
> It gets caught in my attempt to only deprecate ..'s. Technically, it's
> undocumented behaviour and it only happens to work because git-diff
> accepts ranges, but it doesn't operate in an intuitive way.

I beg to differ. `git diff <commit>^!` does exactly what I want: it shows the diff between the commit's first parent and the commit.

And I would not necessarily call this a "range". It is a short-hand.

Can't you allow short-hands in `git diff`, and disallow only ranges that have an explicit `..` in them? You might need to record that somewhere, but I think that should be easy enough.

> I was just wondering what we should do about this case? Should we
> deprecate all invocations of `git diff <range>` except for the special
> case of `git diff <commit>...<commit>`, or should we _only_ deprecate
> `git diff <commit>..<commit>` and allow all other forms of ranges, even
> though it was undocumented behaviour?
>
> : https://[email protected]/

next prev parent reply other threads: [~2019-03-11 13:21 UTC | newest]

Thread overview: 14+ messages / expand [flat|nested] / mbox.gz / Atom feed / top
2019-03-11  9:37 Deprecating git diff ..; dealing with other ranges  Denton Liu
2019-03-11 13:19 ` Johannes Schindelin [this message]
2019-03-11 15:34 ` Elijah Newren
2019-03-12  7:17 ` Junio C Hamano
2019-03-12 17:24 ` Andreas Schwab
2019-03-12 21:01 ` Ævar Arnfjörð Bjarmason
2019-03-13  7:01 ` Johannes Sixt
2019-03-18 17:07 ` Elijah Newren
2019-03-18 17:11 ` Michal Suchánek
2019-03-18 18:51 ` Elijah Newren
2019-03-18 17:59 ` Ævar Arnfjörð Bjarmason
2019-03-13  1:20 ` Duy Nguyen
2019-03-13 18:12 ` Andreas Schwab
2019-03-12  7:22 ` Junio C Hamano
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510208.72/warc/CC-MAIN-20230926111439-20230926141439-00018.warc.gz
CC-MAIN-2023-40
3,724
78
https://moz.com/community/q/user/georgejohn727
code
The error "Duplicate without user-selected canonical” indicates that Google found duplicate URLs that are not canonicalized to a preferred version. Google didn't index these duplicate URLs and assigned a canonical version on its own. How to fix this issue Should these pages even exist? If the answer to this is no, simply remove these pages and return a HTTP status code 410. If these pages have a purpose, then ask yourself whether they carry any value: If yes, then canonicalize them to the preferred version of the URL. Need some inspiration where to canonicalize to? See which URL Google finds most relevant using the URL Inspection tool(opens in a new tab). If Google's listing PDF files for your site, canonicalize them through the HTTP header. If these pages don't carry any value, then make sure to apply the noindex directive through the meta robots tag or X-Robots-Tag HTTP Header.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00582.warc.gz
CC-MAIN-2022-05
894
7
http://www.gamefaqs.com/boards/954437-league-of-legends/65039686?page=2
code
added way too much fluff to make yourself seem better. Just say "I'm bad", it's a lot easier.

When I queue up with gold friends I hold my own and do decently though, that has to count for something :c

woah, how do you manage against GOLD

My bad, I forgot everyone on these boards was platinum for a second, silly me :c
---
Eh

#24 casperrulez | Posted 12/29/2012 1:29:47 AM
i dont play this game
---
ThisIsOchoa anywhere it matters
Nothing I post is relevant. Ever.

#25 XcaIIion | Posted 12/29/2012 1:35:51 AM
because i main support
---
Ahri is mai waifu
Join the glorius evolution!

#26 Skul_ | Posted 12/29/2012 1:38:29 AM
I intentionally feed if anyone on my team has an e or an i in their name.
---
[Consoles] are essentially just really low end PC's with child locks. -DaLaga
i5-2500-GTX550TI-8GB; DSi; 3DS; GBASP; Wii; Galaxy SIII; iPod Touch 4th Gen;

#27 Master_striker | Posted 12/29/2012 1:45:39 AM
Went from 1300 to 1000 because I had 5 d/cs in a row and raged and got into a loss streak. Slowly climbing out.
---
yooooooo

#28 Reaper_Minion | Posted 12/29/2012 1:55:00 AM
1300 right now, because I'm bad and get stuck with badder teammates.
---
/|\ http://i48.tinypic.com/2pzcsv7.jpg \|/
http://www.youtube.com/watch?v=vE7x0WLz7SY

#29 Mystic__Hate | Posted 12/29/2012 2:01:16 AM
My Mundo doesn't carry hard enough.
---
necroix05 posted... As balanced as my breakfast. but I only had the cereal

#30 MechaNapoleon | Posted 12/29/2012 2:45:51 AM
I'm low because I only played a couple games outside of beta.
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424937406179.50/warc/CC-MAIN-20150226075646-00031-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
1,475
18
https://foreignaffairs.co.nz/2020/10/15/mil-osi-video-this-is-earth-in-the-next-100-years-if-we-dont-act-on-climate-change-ways-to-change-the-world/
code
Source: World Economic Forum (video statements) The impact of human activity on climate is becoming more and more visible each year. If we do nothing to change our current way of life, this is what we can expect from our planet in 100 years. The World Economic Forum is the International Organization for Public-Private Cooperation. The Forum engages the foremost political, business, cultural and other leaders of society to shape global, regional and industry agendas. We believe that progress happens by bringing together people from all walks of life who have the drive and the influence to make positive change. World Economic Forum Website ► http://www.weforum.org/ Facebook ► https://www.facebook.com/worldeconomicforum/ YouTube ► https://www.youtube.com/wef Instagram ► https://www.instagram.com/worldeconomicforum/ Twitter ► https://twitter.com/wef LinkedIn ► https://www.linkedin.com/company/world-economic-forum TikTok ► https://www.tiktok.com/@worldeconomicforum Flipboard ► https://flipboard.com/@WEF #WorldEconomicForum #ClimateChange #UrgentAction
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00538.warc.gz
CC-MAIN-2021-04
1,078
12
https://lists.nongnu.org/archive/html/lwip-users/2014-05/msg00045.html
code
I'm using lwip 1.4.1 in an embedded project (PowerPC) very successfully. Now I'm trying to add lwip to the target simulation project (Win32, Visual Studio 2013). The build fails with conflicts like:
error C2011: 'in_addr' : 'struct' type redefinition lwip\src\include\ipv4\lwip\inet.h
error C2011: 'sockaddr_in' : 'struct' type redefinition lwip\src\include\lwip\sockets.h
These types are already defined in winsock.h. How can I fix this problem? Thanks for any hints!
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375439.77/warc/CC-MAIN-20210308112849-20210308142849-00017.warc.gz
CC-MAIN-2021-10
463
6
http://weatherhead.case.edu/faculty/Mohan-Reddy/
code
Mohan Reddy’s interests lie along two dimensions. The first focuses on how non-market institutions (professional societies, trade associations) influence the adoption and diffusion of new technologies. The second is understanding the dynamics of how public goods (social goods) are created through private (corporate) interests.
PhD, Case Western Reserve University, 1985
MBA, Case Western Reserve University, 1977
BS, Mysore University, 1975
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115855094.38/warc/CC-MAIN-20150124161055-00084-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
456
4
https://javarefresh.com/corejava/thread.php
code
A computer system normally has many active processes and threads; this is true even for a single execution core. Processing time for a single core is shared among processes and threads through an OS feature called time slicing.

Threads are lightweight processes. There are two strategies for working with them: directly control thread creation and management yourself each time, or abstract thread management away by passing the application's tasks to an executor.

There are two ways to define thread code: implement the Runnable interface or extend the Thread class. Implementing Runnable is mostly used because the class can still subclass something else; it is more flexible and is used in high-level thread management.

A Thread.sleep period can be terminated by interrupts. Thread.interrupted() returns true if an interrupt has been received. The sleep and join methods are dependent on the OS for timing, so you should not assume that join will wait exactly as long as you specify. Both sleep and join can throw InterruptedException.

Synchronization prevents thread interference (Thread A's result is lost, overwritten by Thread B) and memory consistency errors. A happens-before relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. Final fields, which cannot be modified after the object is constructed, can be safely read through non-synchronized methods once the object is constructed.

Synchronization is built around an internal entity known as the intrinsic lock or monitor lock; every object has an intrinsic lock associated with it. Reentrant synchronization allows a thread to acquire the same lock more than once.

An atomic action cannot stop in the middle: it either happens completely or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete. The volatile keyword makes reads and writes atomic for all variables (including long and double). Changes to a volatile variable are always visible to other threads (due to the happens-before relationship).

Starvation is a situation where a thread is unable to gain regular access to shared resources and is therefore unable to make progress. Livelock: a thread often acts in response to the action of another thread; if the other thread's action is also a response to the action of the first thread, and so on, the threads can end up too busy responding to each other to make progress.

How do you create immutable objects (useful in threaded environments)? If an object's state cannot change after it is constructed, it is called an immutable object.
- Don't provide "setter" methods.
- Make all fields final and private.
- Declare the class as final.
- Make the constructor private and construct instances in factory methods.
- Don't provide methods that modify the mutable objects.
- Don't share references to the mutable objects; never store references to external, mutable objects passed to the constructor. If required, create copies and store references to the copies. Also create copies of your internal mutable objects when necessary, to avoid returning the originals from your methods.

ThreadMXBean is the management interface for the thread system of the Java virtual machine. Through it you can get thread CPU time, thread contention monitoring, synchronization information, deadlock detection, etc.

Advantages of multithreading:
- Increases the performance of a program/task.

Disadvantages of multithreading:
- Sometimes unexpected results.
- Increased complexity of the code.
- The added overhead of thread creation.

Best practices for multithreading:
- Avoid using static objects - shared static state reduces concurrency.
- Use local variables - each thread that accesses the method has its own copy of the objects.
- Use locking judiciously - it is preferable to take advantage of a Lock object over the synchronized keyword.
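The immutable-object checklist above, turned into a compact sketch; the class name and fields are invented for illustration:

```java
import java.util.Date;

// Sketch of the checklist: final class, private final fields, private
// constructor plus factory, defensive copies of any mutable state.
public final class Schedule {
    private final String name;
    private final Date start; // Date is mutable, so copy it defensively

    private Schedule(String name, Date start) {
        this.name = name;
        this.start = new Date(start.getTime()); // copy in
    }

    public static Schedule of(String name, Date start) {
        return new Schedule(name, start);
    }

    public String getName() { return name; }

    public Date getStart() {
        return new Date(start.getTime()); // copy out, never the original
    }
}
```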
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00607.warc.gz
CC-MAIN-2021-39
3,530
68
https://www.ozzu.com/questions/30388/how-to-fix-unmountable-boot-volume
code
I was surfing the Internet one day when my computer jammed up. I restarted it, and when Windows was loading, a blue screen popped up saying "unmountable boot volume" and a few different options appeared. I tried to run in safe mode but the same problem occurred again. What on earth should I do next?

I am having this same problem and get all the way to my recovery CD blue screen, and at the bottom, in white, it says "examining 238418 MB Disk 0 at ID 0 on bus 0 on iaStor..." but it never goes anywhere. I have left it run like this overnight thinking that maybe it takes time, but I have had no progress. — sweetcav
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00735.warc.gz
CC-MAIN-2023-14
615
3
http://eab.abime.net/showpost.php?p=116115&postcount=28
code
They were in full control of the chipset, so it would have been easy; it's just that the solution I mentioned is not trivial. They always thought along the lines of how the software people make c2p work and replicating that in very cheap hardware, instead of doing an easy and dedicated hardware mod. Turning off p2s hardware does not involve anything off the shelf, just gating a few lines plus some logic. A framebuffer could be off the shelf, and was very cheap at that time; they did not do that either. They simply did not see how big this hardware gap was going to grow, so they ignored the demand almost completely.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.65/warc/CC-MAIN-20161020183838-00163-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
577
5
https://help.salsalabs.com/hc/en-us/articles/223341427-Common-Merge-Fields
code
When to use it

Merge fields are commonly used to personalize content on forms and in email blasts. You can use merge fields on all Salsa forms and emails to improve your email metrics (opens, click-throughs, and conversions). You can insert merge fields into email blasts, and you can insert merge fields into forms.

Add Merge Fields to Emails

Just insert one or more Salsa merge fields into a trigger's subject, HTML content, and/or text content using the syntax [[merge_field]] wherever you intend to place its corresponding content. For instance, an email that uses the [[First_Name]] merge field might begin: "Dear [[First_Name]],". This would generate an email to someone who has signed up with the first name of Kristen as: "Dear Kristen,".

Add Merge Fields to Redirect Forms

Redirect forms are the forms supporters are redirected to after they complete an action on an initial form. A few examples:
- The thank-you page a supporter lands on after he or she makes a donation
- The tell-a-friend page a supporter lands on after he or she signs an online petition

Personalize the content on these redirect forms following these steps:
1. Add merge fields to the redirect form.
2. View the public page and copy the public URL.
3. Format the redirect: go to the Follow Up tab of the original form (in this case a donation form), paste the URL (copied above) in the Redirect to field, and manually append the syntax and any merge fields onto the end of the URL you've pasted.

Find All Available Merge Fields

You can insert merge fields into subject lines and forms, but first you have to know what the merge fields are. There is no single, comprehensive list of all merge fields, for both of the following reasons:
- Available merge fields differ with different types of Salsa pages
- Custom fields vary by Salsa account

To find the merge fields on any given page, set up an Email Autoresponse and click the Append Available Values checkbox on the Advanced Options tab. Add the autoresponse to the page in question (i.e., the donation page or the online petition) and follow the instructions in the Autoresponses documentation to send yourself an email with all of the merge fields available on that page.

Common Merge Fields

To save you the step of identifying all potential merge fields on a form (see above), the following is a list of the most frequently used merge fields. They are case-sensitive. If you use the common merge field [[First_Name]] and the First Name field in a supporter record is blank, what should display to that supporter? Should the form or email blast say Dear Supporter? Should it say Dear Friend? You can customize this content by updating default merge values.

Salsa's standard supporter fields are usually the capitalized word or phrase exactly as it appears (with underscores for spaces) in your Salsa headquarters or on an end-user form. So, [[City]] for City, and [[First_Name]] for First Name. Here are the most frequently used Salsa supporter fields:
- [[Email]] Used for signup confirmation messages
- [[State]] provides the standard two-letter postal abbreviation for American states and Canadian provinces, e.g., CA rather than California
- [[StateName]] provides the full state name of a supporter's address, e.g., Texas rather than TX
- [[Country]] provides the full text name of the country, e.g., United States or Brazil

Custom fields use the same double-bracket syntax as regular supporter fields; the field name within those brackets will be the API Name you assign to your custom field.
If you're not sure of the API Name for one of your custom fields, go to the Supporter package and click on Manage Custom Fields. From there, click on the edit link of the custom field in question to find the API Name as shown above. These fields will generally return the database value of the user's response. For instance, a merge field [[religion]] on a custom field which is configured as a VARCHAR field -- a single line of text the user can enter -- might return Anglican if that's what the user types in. However, if it's configured as a RADIO field -- where you provide the user a limited picklist of options -- and "Anglican" is associated with a response whose database value is 1, then 1 will be provided by the merge field.

In addition to the regular supporter fields, donation pages generate a variety of donation-specific merge fields useful for donor receipts:
- [[amount]] The total amount of the gift or transaction
- [[Transaction_Date]] The date and time when the donation was processed
- [[CURRENT_DATE]] offers a more reader-friendly formatted date (e.g., January 13, 2011 instead of the Salsa format of 2011-01-13 15:42)
- [[PNREF]] A unique alphanumeric code returned by the merchant gateway for each transaction. Handy for use as a "confirmation code" in a reply autoresponse
- [[RPREF]] A unique alphanumeric code returned by the merchant gateway for a new recurring donation. Handy for use as a "confirmation code" in a reply autoresponse
- [[donation_KEY]] Returns the numeric donation KEY value associated with the donation in Salsa; can be used as a transaction ID number
- [[donate_page_KEY]] Returns the primary key for the donation page that was used for this donation
- [[Credit_Card_Digits]] Returns a securely obfuscated card value such as 4111xxxxxxxx1111, useful for receipts
- [[Credit_Card_Expiration]] Returns the credit card's expiration in MM/YY format, e.g., 03/12
- [[Tracking_Code]] The tracking code associated with the gift. Since this will be your own organizational code, it is probably most useful for internal notifications rather than acknowledgments to the donor
- [[PAYPERIOD]] The frequency of a recurring donation, e.g., monthly, quarterly, annually. Caution: the frequency is returned as a code and not as a useful description. The mapping of codes to pay periods follows:
  - WEEK - Weekly
  - MONT - Monthly
  - QTER - Quarterly (every 3 months)
  - YEAR - Yearly
- [[TERM]] The number of times the above pay period will run. E.g., if the pay period is monthly and the term is 6, then the donor will be charged every month for six months. If [[TERM]] is 9999, then the recurring donation will continue forever or until the credit card expires
- [[Start_Date]] The starting date of the recurring donation. Subsequent donations will be processed as close as possible to this date (depending on the pay period chosen)
- [[recurring]] A flag indicating whether or not the donation was recurring. If the value is one, then the donation was a recurring donation; otherwise the value is zero
- [[recurring_donation_KEY]] The database key for a recurring donation. If [[recurring]] is one, then this is a recurring donation database key; if [[recurring]] is zero, then the value is -1
- [[Order_Info]] A human-readable text itemization of the entire order, very useful both for purchaser receipts and for internal fulfillment notices
- [[name]] Shipping Name

Actions can generate fields for the subject line and content of the message launched by the action.
These merge fields have names that are specific to each set of content, so if you want to include the text of the action message sent in an acknowledgment email, that acknowledgment will need to be specific to one and only one action page.
- [[Content1111]] Provides the body of the action message (but not the subject) sent by the user, where 1111 is the key number of the specific content the user is sending
- [[Subject1111]] Provides the subject of the action message (but not the body) sent by the user, where 1111 is the key number of the specific content the user is sending

The key number will not be the key number for the action itself. To find the key number of the subject and content section, use Firebug or another similar web development tool on the live public-facing page.

Free RSVP events generate the regular supporter registration fields. Events with paid tickets place their transactions on Salsa's Donation table, and those transactions generate all the same donation merge fields, such as [[amount]] and [[PNREF]]. In addition, all events (both paid and unpaid) have these merge fields available:
- [[event_KEY]] The unique key number of the event registered for. This can be used to include a link back to the event as a reference for the registrant -- especially if the same trigger will be used for multiple different events. Use this format:
- [[Additional_Attendees]] For events permitting guest registrations, the number of additional guest attendees beyond the main registrant

A distributed event's create-an-event page generates merge fields for each unique event your supporters create through it. These fields can be used to customize an acknowledgment email that includes event details:
- [[event_KEY]] The unique key number of each event created through the distributed event page. This can be used to send your event host a link to her or his event
- The event's [[Start]] and [[End]] dates
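Under the hood, a merge field is just token substitution over the [[merge_field]] syntax described above. A toy Java sketch of the idea, not Salsa's actual implementation; the "Friend" fallback mimics the default merge values mentioned earlier:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration of [[merge_field]] substitution; not Salsa's code.
public class MergeDemo {
    static final Pattern FIELD = Pattern.compile("\\[\\[(\\w+)\\]\\]");

    static String merge(String template, Map<String, String> record) {
        Matcher m = FIELD.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Fall back to a default merge value when the record is blank
            String value = record.getOrDefault(m.group(1), "Friend");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(merge("Dear [[First_Name]],",
                Map.of("First_Name", "Kristen"))); // prints: Dear Kristen,
    }
}
```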
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657170639.97/warc/CC-MAIN-20200715164155-20200715194155-00199.warc.gz
CC-MAIN-2020-29
9,027
72
https://uk.wix.com/marketplace/wix-partner/beauvais-designs
code
Nov 7, 2018. Melissa was wonderful to work with. She was quick to answer my messages and emails, communicated well with me, and did exactly what I asked. My only issue was with Wix and that it sometimes would not do things the easiest way. Melissa, however, took the initiative to contact Wix and also PayPal to figure out the best ways to do the challenging things that were important to me. Melissa really helped to relieve some of the stress I was going through with my workload. She knew how to do things that would have taken me away from my photography work to figure out. We were able to get my SEO going quickly, and to get my Christmas ad working smoothly, including sending out mailers. I would highly recommend Melissa, and I'll ask for her help in the future....
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00676.warc.gz
CC-MAIN-2023-14
781
4
https://adaptivepixel.weebly.com/blog/weekend-challenge-14-modelling-upgrades
code
This week's weekend challenge was focused on continuing the modelling of the bedroom. That was the plan, but I had computer issues that had to be sorted first. You see, my motherboard, which had slowly been dying for the last few months, finally gave up the battle and had to be replaced. Alas, it did not go down alone. It managed to corrupt part of my Windows 7 operating system, so that needed to be replaced as well. So after about a day and a half of stripping my machine (it has a lot of components), cleaning it out (dust everywhere), replacing the motherboard, putting back all the components, installing Windows 8.1, installing all the drivers and utilities, and finally reinstalling all my software, I finally have a working computer again. YAY. So after all that I finally got down to doing some modelling. Due to the amount of time spent working on my computer, I didn't have a lot of free time to work on this, so I didn't get much done. What I did get done was to build both of my monitors (missing some wires and logos), a keyboard, shelf brackets as well as the shelves on the other side of the room, a TV set, and some books. I also changed the wall colour to reflect the one I have at the moment. Hi, I'm Claude, head & creator of Adaptive Pixel. You might be wondering what brought you here and I'm here to tell you, it was a link. Here is where I will be posting what I'm currently doing and challenges I've done.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00328.warc.gz
CC-MAIN-2018-30
1,476
5
https://www.molecularecologist.com/2015/09/02/environmental-association-analyses-a-practical-guide-for-a-practical-guide/
code
Obtaining extensive SNP data for your organism of choice isn't such a feat these days, but actually matching that breadth of data with appropriate analyses is still a challenge. Fortunately, there has been an avalanche of new methods to make these connections between genetic variation and environment more clear. Unfortunately, the recent surge in new methodologies sure makes decision making tough. What are the drawbacks of different methods? What works for my data? Why didn't I think of this before I generated all these SNPs? If only there was some sort of… practical guide. That's the goal of a new article by Christian Rellstab and colleagues that has recently been accepted to Molecular Ecology. They offer a thorough and thoughtful guide to environmental association analysis (EAA) to provide a good starting point for those looking to start a project or just analyze their data.

Data – You need some sort of genetic polymorphism data. While multiple varieties are acceptable to use in EAA, the review focuses on SNPs. You also need environmental data that matches the spatial resolution of the genetic data. The choice of data and its subsequent preparation involves a lot of thought. What resolution am I using? Are these variables correlated? How many loci/samples are appropriate? Rellstab et al. do a nice job of reviewing these issues and providing citations for multiple solutions and approaches.

Sampling design – If you want to ask questions about environmental variation, one of the go-to sampling schemes in biology is finding a gradient to sample along. While this seems like second nature to most, sampling along a gradient also makes replication (within environmental variables and evolutionary lineages) a difficult task. Also, gradients are usually designed around one variable (temperature, salinity, etc.), but the effects of environment on genetic data may be due to co-varying environmental parameters that aren't accounted for. Other sampling schemes, like sampling more intensely in areas that are categorically different from one another (low vs. high temperature sites) or sampling broadly across the entire range or niche of a taxon, offer their own disadvantages. Essentially, environmental variables and genetic data often co-vary in complex ways that make it difficult to suggest a "best" study design. Instead, keep these disadvantages in mind while maximizing the fit between the genetic/environmental data and the hypothesis you really want to test.

Environmental Association Analysis

To the main point: what are my options for EAA?

For categorical factors – This is pretty straightforward and the most traditional approach: compare allele frequencies between different categories of environmental variables in replicate. This may be the best approach if you have a very specific environmental variable, a strong expectation of what sort of statistical spread of that variable is required, and the ability to reliably replicate those conditions. For most, this is rare.

Logistic regressions – This family of analyses quantifies the relationship between an environmental factor and the presence/absence of an allele. Sampling individuals from diverse habitats or along environmental gradients is ideally suited for this type of analysis. Software applications of logistic regressions for EAA include the spatial analysis method (SAM) and the recent extended version SAMβADA.

Matrix correlations – These analyses test for correlations between matrices of distance or dissimilarity.
This includes full and partial Mantel tests, a subject we’ve written about previously on this blog. General linear models – These models treat the environmental response variables as a linear function of genetic variables. If that seems backwards to you, you aren’t alone: In EAA, however, environment instead of phenotype is used as response variable. Since the environment experienced by an organism is not caused by its genotype, this might seem conceptually counterintuitive. It is assumed, however, that environmental factors that are strongly correlated with heritable traits can replace them in statistical models. For multivariate versions of these analyses, canonical correlation analysis (CCA) and redundancy analysis (RDA) offer the ability to account for polygenic adaptive traits. In the case of RDA, model testing among variations of environmental parameters, neutral genetic structure, or spatial effects is available. Mixed effects models – Finally, the extensive set of analyses that model allele frequencies (response variable), environmental parameters (fixed factors), and neutral genetic structure (random factor). The advantage here is a standardized and statistically intuitive way to deal with neutral genetic structure. However, each program/technique has a different way of testing for significance and of choosing the type of association (linear, rank, logistic). Some options here include BAYENV2, latent factor mixed models (LFMMs), efficient mixed-model association (EMMA), and TASSEL. Your best bet: mix and match When there are so many methods available, there are bound to be contrasting strengths and weaknesses among them. The authors provide suggestions for combinations of EAAs that might help in various scenarios. For example, want to pull apart potentially adaptive loci? How about first performing an outlier test (BAYESCAN, FDIST, ARLEQUIN) and then feeding those outliers into an EAA? If you have the appropriate sampling scheme and data, leaning on analyses that explicitly account for neutral genetic variation (mixed effects models) provides the most straightforward solution. This is the most in-depth section of the review, and for good reason. The identification of adaptive loci and the environmental variation that shapes them requires validation. Lots of it. Whether from within a dataset, between analyses, or by other researchers, validation is key for the generalization of these phenomena. The good news is that there are now, more than ever, analytical resources for finding environmental associations to start with. But if your Holy Grail is finding the gene(s) that determine the adaptation of wild organisms to their environments, you have a whole career’s worth of work in front of you: Many studies, including most of those described in this review, perform EAA, present a list of interesting loci, compare it to GO databases and stop there, i.e., half way to the goal of identifying those genes that are functionally involved in local adaptation of natural populations. Instead, studies should go further and test their findings using, e.g., independent populations, knock-out mutants, common garden and reciprocal transplant experiments. The effort of such follow-ups should, however, not be underestimated. The most practical advice? Read the paper and get to work! Rellstab, C., Gugerli, F., Eckert, A. J., Hancock, A. M., & Holderegger, R. (2015). A practical guide to environmental association analysis in landscape genomics. Molecular Ecology. DOI: 10.1111/mec.13322
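To make the logistic-regression flavor of EAA concrete, here is a minimal sketch in Python. It assumes a table with one row per sampled individual, a 0/1 column for the presence of an allele at a single locus, and one environmental variable measured at each site; the file name and column names are hypothetical, and a real study would loop over thousands of loci, correct for multiple testing, and account for neutral population structure before trusting any hits.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("snp_env.csv")            # hypothetical input table
y = df["allele_present"]                   # 0/1 allele presence per individual
X = sm.add_constant(df[["temperature"]])   # environmental predictor plus intercept

model = sm.Logit(y, X).fit(disp=False)     # logistic regression of allele on environment
print(model.summary())                     # the slope's p-value gauges the association

A significant positive slope would mean the allele becomes more common as temperature increases — exactly the kind of locus-by-environment signal that tools like SAM and SAMβADA formalize, with extra machinery for spatial data and significance testing.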
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100258.29/warc/CC-MAIN-20231130225634-20231201015634-00516.warc.gz
CC-MAIN-2023-50
7,075
25
https://medium.com/@decodoku/the-history-of-games-for-quantum-computers-a1de98859b5a
code
The History of Games for Quantum Computers It has been over a year since the first ever quantum computer game was created. It is time to write their history so far. It began in the bygone age of 2017, when a man had an idea: To make a game for a quantum computer. Hardware wouldn’t be an issue: IBM had provided that through the IBM Q Experience. The software was also there: ProjectQ, a quantum SDK capable of running jobs on IBM’s devices, had recently been released. All that was needed was a game. The game chosen was Rock-Paper-Scissors. It was adapted to suit the strengths of qubits, and the first quantum game was made. It was called Cat-Box-Scissors. I just made a computer game. Not for the PS4 or XBox or even the Nintendo Switch. I made it for a quantum computer. I…medium.com It wasn’t very good, truth be told. Just a simple first experiment. Little more than a random number generator with a story. Quantum computers deserved something better. And so, only a little later, came the first multiplayer game made with a quantum computer. With techniques once used to probe fundamental properties of the universe, this game would play Battleships. Like normal Battleships, but simpler and more complex at the same time.medium.com These first quantum computer games were simple examples, limited to the command line. They ran on the quantum processor in real time, which meant waiting in the job queue. Unfortunately, waiting for snippets of text is few people’s idea of fun. So what about a game based on preexisting data? The quantum computer can generate everything needed beforehand, allowing the game to run in the fast and responsive way we all expect from modern programs. May brought two quantum games. One was made with the same philosophy as Quantum Solitaire, running on preexisting data. It was inspired by Hunt the Wumpus, an early dungeon crawler. This was Hunt the Quantpus. It also brought a new version of Battleships. This was another game to run in real time. But rather than using ProjectQ to handle the software, as before, it used QISKit: IBM’s newly released native SDK for their hardware. This was the first real-time quantum game to run for multiple rounds. As such, it was the first to go beyond just being a fancy random number generator. It was also created with a noble purpose: To help people learn quantum programming. Due to its increased sophistication, all previous games were downgraded to being mere experiments. Battleships with partial NOT gates was the true first quantum computer game. May 2017 is also notable for being the month that IBM announced their 16 qubit processor. Like all cloud-based quantum processors, it was destined to one day play games. Every game so far was designed by just one guy. Me! In June of 2017, that changed. Rigetti, a quantum computing startup, released their own quantum SDK. At the same time they also made a simple game, aimed at providing a little demonstration of quantum computing. Check it out on their website here. It doesn’t strictly count as a quantum game, since it uses a simulated quantum computer rather than a real one. But Rigetti have the hardware and software to implement it on a real device, so it certainly deserves a mention. A similar simple demonstration was also made by IBM back in March 2017 (see here). Though this one arguably doesn’t quite count as being a game, and never claims to be one, it deserves a mention too. I was still developing my own quantum games, of course. 
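For readers wondering what “little more than a random number generator” means in practice: those first games boiled down to a circuit that puts a qubit into superposition and measures it. Here is a minimal sketch of that idea in plain Python, simulating the single-qubit statevector directly rather than assuming any particular quantum SDK (the early games did essentially the same thing, just on real IBM hardware).

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = H @ np.array([1.0, 0.0])               # |0> into an equal superposition

# Measurement: outcome probabilities are the squared amplitudes (50/50 here)
probs = np.abs(state) ** 2
bit = np.random.choice([0, 1], p=probs)        # the quantum 'coin flip'
print(bit)

Dress that random bit up with a story — which box hides the cat, rock or scissors — and you have roughly the game logic of those first experiments.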
June brought a new entry into the long-running Quantum Battleships franchise. As a further tutorial in quantum programming came Battleships with complementary measurements. June also had the BlueYard Quantum Leap event: a meeting of researchers, startups, investors and journalists all interested in quantum computing. I managed to wangle a ticket as the guy who makes quantum games. There I had a chat with Will Zeng of Rigetti about Spacewar!, one of the first games for normal computers. I also talked with Alan Ho from Google about some thoughts I had on their proposal for ‘quantum supremacy’, and with Jerry Chow of IBM about their 16 qubit device. These conversations went on to inspire some of what was to happen in August. In July, a section was added to the tutorial for the QISKit quantum SDK. It had been promised for a while, but in this month it was finally inaugurated with a notebook on the ‘Quantum Counterfeit Coin Problem’. A collection of Jupyter notebooks using Qiskit. Contribute to Qiskit/qiskit-tutorial development by creating an account…github.com This isn’t really a game in the standard sense. It’s something that you can study with game theory, but not really something that you play. This is also true of another early addition to this section: Quantum Magic Squares (this has since been moved elsewhere in the tutorial). Nevertheless, this section of the tutorial was intended to contain games that can be played as well. Accordingly, it soon became the home of Battleships with partial NOT gates. Back in July of 2016, a bunch of mostly Googlers published a paper. It was a plan on how to prove what they call ‘Quantum Supremacy’: That quantum computers can be better than normal ones at some tasks. The eventual demonstration of this likely won’t be for a useful task. It will be for something quite abstract, something that is heavily biased in favour of the quantum contender. It will also take a good decade or so after the first claims of ‘supremacy’ for true quantum computers to emerge. Nevertheless, the more interesting we can make the task, the more interesting the result will be. The more relatable we make the task, the more understandable the result will be. So let’s make it into a game! That was the idea I first had back at the start of 2017. It slowly developed over the first half of the year, and almost got abandoned. But the conversations I had at the BlueYard event in June focussed my thinking. In August, it emerged. Quantum Supremacy in game form: Quantum Awesomeness. Since last year, we’ve seen a lot of big announcements about prototype quantum computers. From big companies like IBM…medium.com Like Spacewar! before it, this was a game designed to push the hardware to its limits. It would be able to provide context for a supremacy result. It would also provide benchmarking data for devices too small or noisy for supremacy. In this game, the size and connectivity of a quantum processor is presented in the relatable form of a puzzle. Imperfections and noise become an increase in difficulty. With this game, players from any background could start to understand what current devices are really like. The first version was hardwired for a particular device: IBM’s 16 qubit processor. So that’s where we got the first results. Quantum computers have got a lot of attention recently. Though some has been balanced and well informed, that has not…medium.com August was also a big month for Battleships with partial NOT gates. 
It became part of the QISKit tutorial section for games, as created in July. A version adapted to be a bit more playable was also created, with players given some text to read while they wait in the queue. A playthrough of this was recorded and put on YouTube. Though intended only as something to be shown to a few people (and so made without narration or background music), it has been viewed by a few thousand brave souls. The reason for this more playable version was to be part of an event in Aarhus. This was hosted by ScienceAtHome, who make great games about quantum computers. Recently we reported our participation from the Quantum Breathing exhibition and the Quantum Game Cafe. At the cafe…www.scienceathome.org August then ended with the first conference talk regarding games running on quantum computers. This happened at Gamescom in Cologne, one of the world’s biggest trade fairs for gaming. The Autumn months of 2017 were a quiet time for my efforts towards quantum games. I’d like to say that it was because I was busy with serious science stuff. But that would only be partially true. I also made a superposition of emoticons as a quantum version of Hello World. It wasn’t so quiet for others, though. A group of people at the University of Osnabrück made a game for a Comparative Machine Learning class. This didn’t just have quantum computing, but neuromorphic computing too! Duel-of-the-Numbers - A mixing of neuromorphic and quantum computing for the Comparative Machine Learning class…github.com Quantum Awesomeness was designed so that any device could play it. No matter what architecture, or size, or connectivity. As long as you had a bunch of qubits, you could play Quantum Awesomeness on them. Though this was the theory, in practice the software only supported IBM’s 16 qubit device. It was time to widen the net. At the end of November, a big overhaul was committed to GitHub. It could now play on the newly upgraded 16 qubit IBM device. It could play on their 5 qubit devices too. In each case, it used IBM’s QISKit SDK. Another big change was to include support for ProjectQ, which had been neglected by quantum game development for a few months. Support for Forest by Rigetti was soon added, inspired by their announcement of a 19 qubit processor. Quantum Awesomeness was among the first in the queue to run on this new device, with the first data coming in just before Christmas. All data collected so far (from 3 IBM devices and 1 Rigetti device) was then put up on GitHub. This allowed players to play Quantum Awesomeness games from all these quantum processors, without the need for direct access. If anyone wondered whether Rigetti’s 19 qubits were better than IBM’s 16, they could find out for themselves. Just by playing a game. Quantum Awesomeness continued in 2018. The project improved, and was used as the basis of articles to explain the current state of quantum computing. This includes text-based ‘Let’s Plays’ of games run on different devices. Since last year, we’ve seen a lot of big announcements about prototype quantum computers. From big companies like IBM…medium.com In late February, it finally became possible to play the game in a browser. No more cloning repos or configuring Jupyter required. These months also featured the first new game in a while, though it is technically more of a gamified tutorial. 
Through figuring out puzzles, a player can get their first taste of quantum computing. The tutorial itself runs on a normal computer. But it doesn’t end there! The program has additional modes that allow programs to be written and then run on a real device. So it just about counts as a game that runs on a quantum computer, even though neither the ‘game’ nor ‘runs on a quantum computer’ claims are very strong. In March, Microsoft and the University of Bristol teamed up for a one-day Quantum Games Ideathon. The winning team quickly identified what is possibly the most challenging aspect of quantum games: We found it reasonably straightforward to conjure up a concept that either: (a) looked to be fun and engaging; or (b) faithfully represented the underlying quantum mechanics; but not both. In the end they came up with Cats: Quantum Supremacy, a Worms-like game with quantum-inspired weapons. It’s not clear whether the games for this event used Q#, Microsoft’s SDK for building quantum programs. It certainly wouldn’t have run on real quantum hardware, since there isn’t currently any attached to Q#. Nevertheless, the event deserves a mention in this history of quantum games. The end of March was also the deadline for IBM’s Teach Me QISKit award. It challenged entrants to create interesting Jupyter notebooks, to help teach others the basics of quantum programming. The winner, a simulation of the Ising model, was an excellent example of using quantum computers for scientific purposes. But another entrant was a simple game based on the exotic properties of quantum correlations. On the 6th and 7th April, Rigetti held the world’s first quantum hackathon. Or at least the first to allow programs to be run on real quantum hardware. There were participants from a wide range of backgrounds, doing all sorts of projects. Two teams even made games. One was done by myself and my teammates, Jonathan DuBois and M. Sohaib Alam. It was called Link to the Quantum. The part made by Sohaib was an implementation of the Meyer penny game: a battle between Captain Picard and Q where a quantum computer completely changes the outcome. The game was later incorporated into the documentation for Rigetti’s quantum SDK. Also from the hackathon we got a fun mobile game, made by a team with Rigetti’s own Will Zeng. Since summer 2017, I had been working on a game in collaboration with IBM Research. The idea was to let people create jobs to run on real devices, and do it via a mobile game. In the end we created a puzzle game to give people their first taste of quantum programming: Hello Quantum. One version is targeted at casual gamers, and released on mobile devices. Hello Quantum is a puzzle game designed to teach introductory principles of quantum computing. This game was designed…helloquantum.mybluemix.net Though it doesn’t run on quantum computers at all, it aims to set players up to create their own programs on the IBM Q Experience. And provides everything you need to reproduce the puzzles and solutions on a real device. If you want to start playing with quantum computers, you’ve come to the right place. In this article we’ll get you…medium.com We also released an improved version of the command line variant of the game, which I mentioned above in Jan/Feb. Quantum Awesomeness had long been a game that people could play. But it hadn’t yet been presented as a piece of science. It was time to write a paper about it. Quantum processors with sizes in the 10-100 qubit range are now increasingly common. 
However, with increased size comes…scirate.com This was written as a study of how the quantum programs run by the game can help us understand and compare prototype quantum devices. Little reference to its nature as a game was made. One of the selling points of the paper is that it covered all quantum processors available to the public. Then, just as I was putting the finishing touches on it, Rigetti went and released a new device. An 8 qubit one this time. The initialization function for a QPUConnection object must be provided a specific Rigetti QPU as an argument, so that…docs.rigetti.com Fortunately, they let me have a go on it pretty quickly. So Quantum Awesomeness added a new device to its roster. What is the best platform for playing games? PC or console? PlayStation or Xbox? How about something different: the unit testing capabilities of an SDK? This is what Microsoft did to help people learn to program quantum computers with their Q# language. The first four challenges, known as Quantum Katas, went online during this month. For those who want to explore quantum computing and learn the Q# programming language at their own pace, we have…cloudblogs.microsoft.com Again, this is an example of something that doesn’t actually run on a quantum computer. But since the programs you write certainly could run on real quantum computers (and since we have nothing else to talk about this month) it deserves a mention. This month had two games made in game jams. One was by me for the Ludum Dare jam. I tried to use quantum walks to make another game inspired by Hunt the Wumpus. It didn’t really amount to much, but you can find it here. A better and prettier game was made by Desiree Vogt-Lee (who also maintains a great list of resources for learning about quantum). It’s called Quantum Cat-sweeper, and is based on Minesweeper (as you might have guessed!). It runs on a simulator, or on a 5 qubit IBM device. At the beginning of this month, I got a job working for IBM Research. One of my teammates from the Rigetti hackathon, Sohaib Alam, started a job at Rigetti at the same time. Quantum game design seems to be a career with better prospects than normal game design! The command line version of Hello Quantum got good reviews from the few people who found it. This was despite its rather rudimentary and ugly form. It was time to make it better, prettier and worth putting in the games folder of the Qiskit tutorial. The result is called Hello Qiskit, because it gamifies the process of making your first quantum programs in Qiskit. You could run the whole thing on a real quantum computer, but that probably wouldn’t be the best idea. Use the simulator until the very end, where you can switch to a real device to provide the unique nature of quantum variables. This month brought another chapter in the great saga of Quantum Awesomeness: A reimplementation purely in Qiskit, designed to live in the games folder of the Qiskit tutorial. Also, yet another post here on Medium about this game that everyone must be sick of by now! Quantum computers are built out of qubits. But just having lots of qubits is not enough.medium.com Hopefully more things by you and less by me! Appendix: Quantum game prehistory Universe Splitter debuted in 2009. It’s a coin flip app, whose uniqueness comes from the fact that it uses a quantum source of random numbers. This allows it to connect to some of the narrative around quantum physics. Specifically, it presents itself as a way of splitting the universe whenever you want to make a decision. 
Read reviews, compare customer ratings, see screenshots, and learn more about Universe Splitter. Download Universe…itunes.apple.com It’s not a game, but it has gamified elements. And it doesn’t run on a quantum computer, but it does use some quantum hardware. So it is a definite forerunner of games running on quantum computers, and an inspiration to show us what can be done even with just a drop of quantum. In 2016 there were a whole bunch of projects combining quantum and games or gamification. For an incomplete list, see this session of talks about them from early 2017, as well as the Quantum Game with Photons. Though 2016 brought much quantum gamification, only three projects involved real devices. One was the IBM Q Experience. Run algorithms and experiments on IBM's quantum processor via IBM Cloud.quantumexperience.ng.bluemix.net Don’t be frightened off by the pumpkin. It’s just a relic from the Halloween reskin. The Q Experience is actually a non-scary way of creating simple programs and running them on real devices. It has graphics and a drag-and-drop mechanic that you might find in a game. Another was the Alice experiment. In order to learn more about the Alice experiment, we decided to talk with one of the first people trying the interface…www.scienceathome.org Since it was a project by ScienceAtHome, it definitely included gamified elements along with the science. But it was never considered to be a game. And though the hardware was certainly related to quantum computation, it wasn’t a quantum computer. The third was the Big Bell Test. Thanks to all Bellsters for their contribution and their interest in our research! Our measurement will have to go on…thebigbelltest.org This used human players to generate randomness, which was then used in a set of real quantum experiments. Something else notable had also come out of the University of Bristol a few years before. Welcome to Quantum in the Cloud. This is a project which aims to provide resources for anybody interested in quantum…www.bristol.ac.uk Again, it allowed access to real devices via the cloud. And again it had a relatively game-like interface. Along with the Alice experiment and the Q Experience, this was an early example of allowing general access to cutting-edge quantum devices. Hopefully, they are the forerunners of many more.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257767.49/warc/CC-MAIN-20190524204559-20190524230559-00278.warc.gz
CC-MAIN-2019-22
20,031
103
https://products.aspose.cloud/cells/net/conversion/fods-to-html/
code
Convert FODS to HTML in the Cloud
Excel & OpenOffice spreadsheet conversion with the open source Cloud SDK for .NET

FODS to HTML Conversion in .NET Apps
- Create an account at Dashboard to get free API quota & authorization details
- Create CellsApi with Client Id, Client Secret, Base URL & API version
- Upload the FODS file to default Cloud Storage
- Convert it with the CellsApi.CellsSaveAsPostDocumentSaveAs method to get the resultant HTML file

Get Started with Excel REST API
Get Excel Cloud SDK for .NET source code from GitHub to compile the SDK yourself, or head to the Releases for alternative download options. Also have a look at the Swagger-based API Reference to know more about the Excel REST API.

C# .NET Code for FODS to HTML Conversion
// For complete examples and data files, please go to https://github.com/aspose-cells-cloud/aspose-cells-cloud-dotnet/
CellsApi instance = new CellsApi(clientId, clientSecret, apiVersion, baseurl);
string name = "Book1.fods"; // sample workbook name (the scraped snippet referenced a BOOK1 constant)
SaveOptions saveOptions = new SaveOptions();
saveOptions.SaveFormat = "html";
// upload the source file to the "DropBox" storage, then request the conversion
instance.UploadFile(folder + @"\" + name, File.OpenRead(@"C:\TestData\" + name), "DropBox");
var response = instance.CellsSaveAsPostDocumentSaveAs(name, saveOptions, "output.html", null, null, folder, "DropBox");

How can I get started with Aspose.Cells REST APIs? Quickstart not only guides you through the initialization of the Aspose.Cells Cloud API, it also helps in installing the required libraries.
Where can I see the release notes for Aspose.Cells Cloud API? Complete release notes can be reviewed at Aspose.Cells Cloud Documentation.
Is it safe to convert FODS to HTML in the Cloud? Of course! Aspose Cloud uses Amazon EC2 cloud servers that guarantee the security and resilience of the service. Please read more about Aspose's Security Practices.
What file formats are supported by Aspose.Cells Cloud API? Aspose.Cells Cloud can convert FODS to XML, XLT, XLTM, XLTX, SVG, XLSB, TXT, ODS, TSV, DIF, PDF, TIFF, MD, XLSM, MHTML, CSV and more. Check out the complete list of supported file formats.
I cannot find the SDK for my favorite language. What should I do? Aspose.Cells Cloud is also available as a Docker Container. Try using it with cURL in case your required SDK is not available yet.
I do not have time to set up. Is there a quick demo for FODS to HTML that I can try? Indeed! Check out the FODS to HTML Conversion App.

Other Conversion Options
FODS What is FODS File Format? A file with .fods extension is a type of OpenDocument Spreadsheet document format that stores data in rows and columns. The format is specified as part of ODF 1.2 specifications published and maintained by OASIS. FODS files cannot be opened with Excel, another spreadsheet software application by Microsoft. FODS files can be saved as ODS using LibreOffice and can be converted to other formats such as XLS and XLSX.
HTML What is HTML File Format? HTML (HyperText Markup Language) is the standard markup language for documents displayed in a web browser.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651815.80/warc/CC-MAIN-20230605085657-20230605115657-00457.warc.gz
CC-MAIN-2023-23
2,842
22
http://www.linuxpromagazine.com/Issues/2005/52/DeskTOPia-Skippy
code
Working with the Skippy Screen Pager If your window manager is too boring or Spartan for your liking, why not add a touch of pep? Skippy is an imaginative screen pager with an integrated preview function. Almost any window manager will give you a window list, displaying a menu with the active windows when you click or press the right key. If the programmer who developed the window manager has a soft spot for graphical gimmicks, the list might add icons to the program names. But a window chooser will not speed up the process of switching between windows if you are working with a selection of different browsers and terminal windows. Skippy by Hyriand to the rescue: instead of giving you a simple list, Skippy displays the active application windows graphically in full-screen mode. Window managers have different approaches to handling active windows, and Skippy is choosy about the managers it supports. To ensure that you will be able to switch between GUI-based programs, you need a Gnome- or NetWM-compatible window manager, such as Waimea. The homepage for the window manager or a quick glance at the Readme file supplied with the manager should tell you if this is the case. Also, the Skippy developers have a list of window managers that Skippy supports on the project homepage. Supported managers include Fluxbox 0.9.9, XFWM4, IceWM and WindowMaker.
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928520.68/warc/CC-MAIN-20150521113208-00335-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
2,054
44
http://virtualization.sys-con.com/node/2962670
code
By Business Wire | February 10, 2014 10:30 AM EST
Hortonworks, the leading contributor to and provider of enterprise Apache™ Hadoop®, and Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, announced today they have expanded their relationship through the formation of a strategic alliance encompassing the integration of product lines, joint go-to-market initiatives and collaborative customer support. The companies also announced the availability of the beta program for the Hortonworks Data Platform (HDP) plug-in for Red Hat Storage. Hadoop continues to gain traction in enterprise big data implementations [1]. A recent IDC study commissioned by Red Hat, titled “Trends in Enterprise Hadoop Deployments” reports, “... 32% of respondents indicated that their firms have existing Hadoop deployments. An additional 31% indicated that they had plans to deploy it within 12 months. And finally 36% said that their Hadoop deployment schedule could go beyond 12 months.” A comprehensive open source approach to the Hadoop platform can address the growing requirements of key enterprise stakeholders and their big data initiatives. With an enterprise Apache Hadoop platform that is tightly integrated with open hybrid cloud technologies, such as Red Hat Enterprise Linux OpenStack Platform, OpenJDK, Red Hat JBoss Middleware, and Red Hat Storage, the two companies plan to deliver infrastructure solutions that enable the next generation of data-driven applications. This will enable enterprise stakeholders to more quickly analyze big data to realize business insights and value. Immediate initiatives that are included in the expanded Hortonworks and Red Hat strategic alliance include:
- Data architects will be able to combine data in a single, scalable open source repository. Available as a beta software version, the Hortonworks Data Platform (HDP) combined with Red Hat Storage provides a secure and resilient general-purpose storage pool with multiple interfaces, including Hadoop, POSIX and OpenStack Object Storage (Swift). This can improve flexibility and speed the development and deployment of new and existing analytic workflows.
- Enterprise operators and application developers will be able to elastically scale their Hadoop infrastructure to meet changing demand with an open and flexible platform that supports a full range of deployment scenarios – from physical to virtual and cloud. HDP combined with Red Hat Enterprise Linux and OpenJDK provides faster development of enterprise-strength analytic applications. HDP combined with Red Hat Enterprise Linux OpenStack Platform gives elastic Hadoop services on a secure, private cloud infrastructure to lower costs and improve flexibility.
- Developers will be able to quickly build new analytic applications to take advantage of new data types and data analysts will have improved access to data through standard interfaces. HDP combined with Red Hat JBoss Data Virtualization integrates Hadoop with existing information sources including data warehouses, SQL and NoSQL databases, enterprise and cloud applications, and flat and XML files. The solution creates business-friendly, reusable and virtual data models with unified views by combining and transforming data from multiple sources including Hadoop. This creates integrated data available on-demand for external applications through standard SQL and web services interfaces. 
To learn more about this announcement and to hear from Hortonworks and Red Hat executives, join us for a press conference webcast beginning at 11:00 am ET today. The keynote and Q&A session will be available on demand following the live event. To register for and join the event or to access the on-demand replay content, visit https://vts.inxpo.com/Launch/QReg.htm?ShowKey=18358.
Shaun Connolly, vice president, corporate strategy, Hortonworks: “At the rapid rate that enterprises expand their Hadoop requirements – due to the business consistently identifying new use cases and more internal stakeholders – the Red Hat and Hortonworks strategic alliance provides a seamless approach to enabling the next generation of data-driven applications. Our mutual customers complement both their Hadoop strategy and commitment to community-driven open source innovation.”
Ranga Rangachari, vice president and general manager, Storage and Big Data, Red Hat: “Data – specifically data processed with Hadoop – is the killer application for the open hybrid cloud. Enterprises are looking to IT solution providers to help with a dramatic reduction in time-to-results for their big data projects. Red Hat’s strategic alliance with Hortonworks is focused on helping customers with efficiency and agility as they embark on big data projects.”
Matthew Aslett, research director, data management and analytics, 451 Research: “Hortonworks and Red Hat are natural partners given the strong commitment to open source on both sides. The strategic alliance will benefit Hortonworks and Red Hat customers looking to develop and deploy Hadoop applications, as well as the wider Hadoop community as the results of joint-development work are contributed back into Apache Hadoop.”
Mike Peterson, vice president, platforms and data architecture, Neustar: “We use Hadoop to capture and securely store an unprecedented amount of data, which has helped transform the way we do business. By taking another step to make Hadoop even more powerful, Hortonworks and Red Hat are helping enterprises like ours derive even more value from their big data strategy. Open source technologies from companies like Hortonworks and Red Hat continue to provide Neustar with the high levels of innovation required to meet the needs of our customers and grow our business.”
- Learn more about the beta program for the HDP plug-in for Red Hat Storage
- Learn more about Red Hat and Hortonworks open source collaboration
- Learn more about Hortonworks
- Learn more about Hortonworks Data Platform
- Learn more about Red Hat Storage
- Learn more about Red Hat Enterprise Linux
- Learn more about Red Hat Enterprise Linux OpenStack Platform
- Learn more about Red Hat JBoss Data Virtualization
About Hortonworks
Hortonworks is the only 100-percent open source software provider to develop, distribute and support an Apache Hadoop platform explicitly built and tested for enterprise-grade deployments. Developed by the original architects, builders and operators of Hadoop, Hortonworks stewards the core and delivers the critical services required by the enterprise to reliably and effectively run Hadoop at scale. Our distribution, Hortonworks Data Platform, provides an open and stable foundation for enterprises and a growing ecosystem to build and deploy big data solutions. 
Hortonworks also provides unmatched technical support, training and certification programs. For more information, visit www.hortonworks.com. Get started and Go from Zero to Hadoop in 15 Minutes with the Hortonworks Sandbox. About Red Hat, Inc. Red Hat is the world's leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As the connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com. Red Hat’s Forward-Looking Statements Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to delays or reductions in information technology spending; the effects of industry consolidation; the ability of the Company to compete effectively; the integration of acquisitions and the ability to market successfully acquired technologies and products; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; risks related to data and information security vulnerabilities; ineffective management of, and control over, the Company's growth and international operations; fluctuations in exchange rates; and changes in and a dependence on key personnel, as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission's website at http://www.sec.gov), including those found therein under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations". In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release. 
Red Hat, Red Hat Enterprise Linux, the Shadowman logo and JBoss are registered trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is a registered trademark of Linus Torvalds. The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. ©2014 Hortonworks Inc. All rights reserved. Hortonworks is a trademark of Hortonworks Inc. in the United States and other jurisdictions.
[1] IDC White Paper, Trends in Enterprise Hadoop Deployments, commissioned by Red Hat, August 2013.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093899.18/warc/CC-MAIN-20150627031813-00213-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
22,618
82
http://scaledagileframework.com/architectural-feature/
code
You can’t chemical your way out of soil infertility. – Joel Salatin Architectural Features Abstract Every system, whether new or not, eventually requires significant architecture changes in order to support required new business functionality. Therefore, system architecture must be continuously evolved to react to the changing business environment, or development velocity will decrease precipitously. In the Framework, we drive system-level architecture initiatives via a specific program backlog item type, Architectural Features. Architectural features are technical system services that allow developers to implement business features that deliver solution value to the end users and the enterprise. Like business features, architectural features are maintained in the program backlog, where they are identified, prioritized, and justified based on economics. System Architects are often responsible for identifying, splitting and prioritizing these features. Implementation is the responsibility of the Agile teams – a result of collaboration between Agile teams, the system architect, the system team and Product Management. Architectural features must be split as necessary to fit within PSI (Potentially Shippable Increment) boundaries, as failure to do so may create an unstable system, which defeats the purpose of the PSI. As with solution features, splitting architectural features and implementing them incrementally is both art and science, and an integral skill necessary for implementing agility at program, and indeed enterprise, scale. Sources of Architectural Features Architectural features typically arise from three primary sources:
- As a result of splitting Business and Architecture Epics from the Portfolio Level. For example, a business epic such as “Implement a common install across all products in the suite” might drive an architectural feature such as “Adapt Corporate Chat Service to use InstallShield”.
- From the Program Vision and Roadmap. For example, a search system might need the ability to “search by keywords”, “build automatic summary of a topic” and “filter search results”. The initial technical response might be to “build an entity extraction mechanism”.
- From some problem with the existing solution. For example, a need to “enhance performance” might drive a Nonfunctional Requirement such as “display all user screens in less than 3 sec”. In fact, many NFRs come into existence as a result of architectural features, and they tend to build over time, as is illustrated in Figure 1.
Analysis and Modeling of Architectural Features Every architectural feature can be thought of as a transformation of a system, in which the new system acquires new qualities, as is illustrated in Figure 2. To achieve this transformation, system models, internal system logic and Non-Functional Requirements (NFRs) change as well. For example, in order to achieve a response time of less than 3 sec, internal logic might need to be refactored such that “select data from tables X, Y and Z…” would be transformed into “create indexes for the required fields; select data from X, Y, and Z…; keep the dataset in-memory until modified”. Thinking through the changes is just part of the analysis needed. This analysis also includes:
- Identifying affected areas of the solution
- Splitting the feature
- Identifying acceptance criteria
- Estimating implementation effort, and
- Assessing impact and risks
Implementation requires a synchronized effort of the various “players” in the program. 
Agile Teams implement the code for the architectural feature and also:
- Exchange new knowledge about practices and approaches in implementing testing scenarios
- Translate related interfaces, algorithms and component interaction patterns
- Manage dependencies and redundancies as a result of system-wide refactors.
The System Architect supports implementation and typically has the following responsibilities:
- Drive a shared understanding of the feature and understand the plans for implementation
- Attend Agile teams’ planning, backlog grooming, and demo meetings as a subject matter expert and stakeholder
- Work with Product Owners to define acceptance criteria for the features and the technical Stories that result
- Work with the System Team to understand infrastructure changes required to integrate and test the new system
- Use Weighted Shortest Job First (WSJF) to prioritize architectural features, as sketched after this section.
The System Team:
- Implements changes in the integration and testing infrastructure
- Creates new NFR tests to ensure the verification of the system requirements required by the architectural feature
- Provides Agile teams with data sets and test cases for testing associated NFRs.
Product Management:
- Assures that the Program Backlog is optimized with the right balance of business and architectural features
- Ensures that architectural features are implemented in a sequence that facilitates a flow of business features
- Works with product owners to understand the relative impacts and dependencies between business functionality and system qualities.
Architectural features are critical artifacts used to build and extend the Architectural Runway in support of the ongoing implementation of new business features. They provide a mechanism to implement architectural epics within the context of each program and each Potentially Shippable Increment (PSI). Agile teams collaborate with system architects, product management, and the system team to implement architectural features in support of effective technical evolution of the system. Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley, 2011. Last update: 15 January, 2013 © 2011-2013 Leffingwell, LLC. All rights reserved.
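Since WSJF does the heavy lifting in prioritization here, a small worked sketch may help. SAFe commonly defines WSJF as cost of delay divided by job size, with cost of delay approximated by the sum of user/business value, time criticality, and risk reduction. The Python below is a minimal illustration under that assumption; the feature names echo the examples above, but all scores are made up.

def wsjf(business_value, time_criticality, risk_reduction, job_size):
    # cost of delay / job size — higher scores should be done sooner
    return (business_value + time_criticality + risk_reduction) / job_size

backlog = [
    # (feature, business value, time criticality, risk reduction, job size)
    ("Adapt Corporate Chat Service to use InstallShield", 3, 8, 5, 5),
    ("Build an entity extraction mechanism",              8, 3, 3, 13),
    ("Display all user screens in less than 3 sec",       5, 5, 8, 8),
]

for name, bv, tc, rr, size in sorted(backlog, key=lambda f: wsjf(*f[1:]), reverse=True):
    print(f"{wsjf(bv, tc, rr, size):5.2f}  {name}")

Re-scoring the backlog at each PSI boundary keeps the balance between business and architectural features an economic decision rather than a political one.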
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382920/warc/CC-MAIN-20130516092622-00060-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
5,796
41
http://sqa.stackexchange.com/tags/learning/new
code
New answers tagged learning

For this, you first need to create an Excel file in which the first row holds the column/field names, such as FirstName, LastName, Address and DOB, and the subsequent rows of the same sheet hold the values. Save this file (preferably as CSV, since CSV files are lighter than Excel files and will not decrease performance when you have a number of ...
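The answer above is cut off, but the data-driven pattern it describes is simple to sketch. Below is a minimal Python illustration; the testdata.csv file name, the column names taken from the answer's example, and the login_test function are all hypothetical:

```python
import csv

def login_test(first_name, last_name, address, dob):
    # Placeholder for the real test step that consumes one row of data.
    print(f"Running test for {first_name} {last_name} ({dob}) at {address}")

# The first row of the CSV holds the field names; each later row is one case.
with open("testdata.csv", newline="") as f:
    for row in csv.DictReader(f):
        login_test(row["FirstName"], row["LastName"], row["Address"], row["DOB"])
```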
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066275.44/warc/CC-MAIN-20150827025426-00086-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
424
3
https://education.ti.com/en/activity/detail?id=FCC3FB3D96244A45B35DFC519106A21D
code
Students will consider a model of a population living in an unfavorable habitat. Given an initial population, students will describe the population as time passes.

Before the Activity
See the attached Activity PDF file(s) for detailed instructions for this activity.

During the Activity
Follow the procedures outlined in the activity. Describe the population as time passes from a given initial population.

After the Activity
Review student results. As a class, discuss questions that appeared to be more challenging. Re-teach concepts as necessary.
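The attached activity PDF is not included in this record, so the exact model is unknown; one common textbook model for an unfavorable habitat keeps only a fixed fraction of the population each period. The Python sketch below is an illustrative assumption, with a made-up survival rate and initial population:

```python
# Illustrative model: each period the population keeps a fraction r < 1.
def project_population(p0, r=0.8, periods=10):
    population = [float(p0)]
    for _ in range(periods):
        population.append(population[-1] * r)
    return population

# Starting from 500 individuals, the population decays toward zero.
for year, p in enumerate(project_population(500)):
    print(f"year {year}: {p:.1f}")
```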
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00337.warc.gz
CC-MAIN-2024-18
543
10
http://fictional.answers.wikia.com/wiki/How_do_I_make_unlockable_characters_with_game_maker
code
I'm trying to make a game with unlockable characters, but every time I try, I get an error message... the code I'm trying is: ///unlock_start/// unlockharru = 1 unlockyuudi = 1 unlockibici = 1 unlocktisue = 1 unlockyosht = 1 unlockshieka = 1 unlockryu = 0 (tells the game what to start out with) unlocksol = 0 unlockken = 0 unlockyarra = 0 unlockkohn = 0 unlockkrieg = 0 unlockkension = 0 unlockyon = 0 ///kohn_met/// (tells the game that kohn should be unlocked by now) unlockkohn = 1 ///kohnselection (create_event)/// if unlockkohn = o (kohnselection) = visible = false (tells the game to show kohn's icon if kohn has been met... if not then hide it) if unlockkohn = 1 (kohnselection) = visible = true (just using kohn as a test player) When I run the game, an error message comes up saying that unlockkohn is an unknown variable, even though I already have the variables set in the unlock_start object. (In the actual code box it's more structured than shown here.)
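No answer is attached to this page, so the following is only a guess at a fix; everything beyond the asker's own object and variable names is assumed. In GameMaker's GML, a variable assigned in one object's event is an instance variable of that object, so other objects (like kohnselection) cannot see it, which matches the "unknown variable" error; the letter o in "if unlockkohn = o" would also need to be the digit 0. A sketch using global variables:

```
/// unlock_start (Create event): global. makes the flags visible everywhere
global.unlockkohn = 0;

/// kohn_met: mark Kohn as unlocked
global.unlockkohn = 1;

/// kohnselection (Create event): show the icon only once Kohn is unlocked
visible = (global.unlockkohn == 1);
```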
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00283.warc.gz
CC-MAIN-2017-34
950
6
http://seriouslyguys.com/reptile-hippie-unhappy-with-snake-hunt-florida-is-shocked/
code
In a move that probably shouldn’t surprise anyone, a species traitor reptile specialist in Florida isn’t too keen on the ongoing Python Challenge. The challenge is designed to aid the natural environment of Florida (the part that doesn’t include senior citizens, pre-zombies and Cuban pork sandwiches) by slowly but surely eradicating the invasive snakes, thus returning nature to its natural balance. Not only that, but it also pays the most successful hunter. See? Positive reinforcement! But no, Dr. Kevin Wright thinks that feral mammals actually do more damage than the pythons and suggests that we should find ways to unearth their biology through economic means. Clearly he has no memory of the Great Snail Miami Tidal Wave of 2011. The fool. We can’t throw money at the problem to fix it, but we can throw bullets.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202125.41/warc/CC-MAIN-20190319183735-20190319205735-00043.warc.gz
CC-MAIN-2019-13
831
3
https://www.visualcv.com/qianru-sun
code
Qianru Sun is currently a postdoctoral researcher in the Department of Computer Vision and Multimodal Computing at the Max Planck Institute for Informatics, Saarbruecken, Germany. Her collaborators are Prof. Bernt Schiele and Dr. Mario Fritz. She obtained her PhD degree from the School of Electronics Engineering and Computer Science, Peking University, in Jan. 2016. Her research interests include computer vision, pattern recognition and time series prediction. She has specific application experience in human action recognition, prediction, anomaly detection in surveillance videos, and social relation recognition in daily life photos.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00625-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
624
1
https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/ToolsOverview/ToolsOverview.html
code
Important: OpenGL ES was deprecated in iOS 12. To create high-performance code on GPUs, use the Metal framework instead. See Metal.

Xcode OpenGL ES Tools Overview

Xcode tools for debugging, analyzing, and tuning OpenGL ES applications are useful during all stages of development. The FPS Debug Gauge and GPU report summarize your app’s GPU performance every time you run it from Xcode, so you can quickly spot performance issues while designing and building your renderer. Once you’ve found a trouble spot, capture a frame and use Xcode’s OpenGL ES Frame Debugger interface to pinpoint rendering problems and solve performance issues. Effectively using the Xcode OpenGL ES features requires some familiarity with Xcode’s debugging interface. For background information, read Xcode Overview.

Using the FPS Debug Gauge and GPU Report

The FPS debug gauge and accompanying GPU report, shown in Figure B-1, provide a high-level summary of your app’s OpenGL ES performance while it runs. By monitoring these displays when developing your app, you can discover performance issues as they arise and consider where to focus your tuning efforts. The debug gauge and report contain the following displays:

FPS Gauge. Shows the current animation rate of your app, in frames per second (FPS), and a recent history of FPS readings. Click this gauge to display the GPU report in Xcode’s primary editor.

Frames Per Second. Shows the current frame rate, relative to the target frame rate set by your app (often 30 or 60 FPS). A blue arc indicates the recent range of FPS readings.

Utilization. Shows three bars, breaking down your app’s use of the different processing resources on the GPU and indicating the possible locations of performance bottlenecks in your use of graphics hardware. The Tiler bar measures use of the GPU’s geometry processing resources. High tiler utilization can indicate performance bottlenecks in the vertex and primitive processing stages of the OpenGL ES pipeline, such as using inefficient vertex shader code or drawing an excessive number of vertices or primitives each frame. The Renderer bar measures use of the GPU’s pixel processing resources. High renderer utilization can indicate performance bottlenecks in the fragment and pixel processing stages of the OpenGL ES pipeline, such as using inefficient fragment shader code or processing additional fragments each frame for color blending. The Device bar shows overall GPU usage, incorporating both tiler and renderer usage.

Frame Time. Shows the time spent processing each frame on both the CPU and GPU. This graph can indicate whether your app makes effective use of CPU/GPU parallelism. If your app spends more time in CPU processing, you may be able to improve performance by moving work to the GPU. For example, if each frame requires many similar glDrawElements calls, you can use hardware instancing to reduce CPU overhead. (For details, see Use Instanced Drawing to Minimize Draw Calls.) If your app spends more time in GPU processing, you may be able to improve performance by moving work to the CPU. For example, if a shader performs the same calculation with the same result for every vertex or fragment during a particular draw call, you can perform that computation once on the CPU and pass its result to the shader in a uniform variable. (See Use Uniforms or Constants Instead of Computing Values in a Shader.)
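As a concrete illustration of that last tip, here is a minimal sketch of hoisting a per-draw-call computation out of a shader and into a uniform. It is not code from this guide; the program handle, the u_rotation uniform, and the helper name are invented for the example:

```c
#include <math.h>
#include <OpenGLES/ES2/gl.h>

/* Instead of evaluating sin/cos per vertex in GLSL, compute the 2x2
   rotation once per draw call on the CPU and hand it to the shader. */
static void set_rotation_uniform(GLuint program, float time)
{
    const GLfloat c = cosf(time), s = sinf(time);
    const GLfloat rot[4] = { c, s, -s, c };          /* column-major mat2 */
    const GLint loc = glGetUniformLocation(program, "u_rotation");
    glUniformMatrix2fv(loc, 1, GL_FALSE, rot);
    /* Vertex shader side:
         uniform mat2 u_rotation;
         gl_Position.xy = u_rotation * a_position.xy;  */
}
```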
Program Performance. Only appears after you capture a frame (see Capturing and Analyzing an OpenGL ES Frame below), showing the time spent in each shader program while rendering the captured frame, both in milliseconds and as a percentage of the total frame rendering time. Expanding the listing for a program shows the draw calls made using that program and the rendering time contribution from each. Select a program in the list to view its shader source code in the assistant editor, or click the arrow icon next to a draw call to select that call in the frame navigator (see Navigator Area below). When tuning your app, you can use this graph to find opportunities for optimization. For example, if one program takes 50% of the frame rendering time, you gain more performance by optimizing it than by improving the speed of a program that accounts for only 10% of frame time. Though this view organizes frame time by shader program, remember that improving your shader algorithms isn’t the only way to optimize your app’s performance—for example, you can also reduce the number of draw calls that use a costly shader program, or reduce the number of fragments processed by a slow fragment shader.

Problems & Solutions. Appears only after Xcode analyzes a frame capture (see Capturing and Analyzing an OpenGL ES Frame); this area lists possible issues found during analysis and recommendations for improving performance.

When you make changes to a GLSL shader program in a captured frame (see Editing Shader Programs below), the Frame Time and Program Performance graphs expand to show both the baseline rendering time of the frame as originally captured and the current rendering time using your edited shaders.

Capturing and Analyzing an OpenGL ES Frame

For a detailed look at your app’s OpenGL ES usage, capture the sequence of OpenGL ES commands used to render a single frame of animation. Xcode offers several ways to begin a frame capture:

Manual capture. While running your app in Xcode, click the camera icon in the debug bar (shown in Figure B-2) or choose Capture OpenGL ES Frame from the Debug menu.

Breakpoint action. Choose Capture OpenGL ES Frame as an action for any breakpoint. When the debugger reaches a breakpoint with this action, Xcode automatically captures a frame. (See Setting Breakpoint Actions and Options.) If you use this action with an OpenGL ES Error breakpoint while developing your app (see Adding an OpenGL ES Error Breakpoint), you can use the OpenGL ES Frame Debugger to investigate the causes of OpenGL ES errors whenever they occur.

OpenGL ES event marker. Programmatically trigger a frame capture by inserting an event marker in the OpenGL ES command stream. The following command inserts such a marker:

glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame");

When the OpenGL ES client reaches this marker, it finishes rendering the frame, then Xcode automatically captures the entire sequence of commands used to render that frame.

After Xcode has captured the frame, it shows the OpenGL ES Frame Debugger interface. Use this interface to inspect the sequence of OpenGL ES commands that render the frame and examine OpenGL ES resources, as discussed in Touring the OpenGL ES Frame Debugger.

In addition, Xcode can perform an automated analysis of your app’s OpenGL ES usage to determine which parts of your renderer and shader architecture can benefit most from performance optimizations. To use this option, click the Analyze button at the top of the GPU report (shown at the top right in Figure B-1).
When you click the Analyze button, Xcode captures a frame (if one hasn’t been captured already), then runs your rendering code through a series of experiments using the attached iOS device. For example, to see if your rendering speed is limited by texture sizes, Xcode runs the captured sequence of OpenGL ES commands both with the texture data your app submitted to the GPU and with a size-reduced texture set. After Xcode finishes its analysis, the Problems & Solutions area of the GPU report lists any issues it found and suggestions for possible performance improvements.

Touring the OpenGL ES Frame Debugger

After Xcode captures a frame, it automatically reconfigures its interface for OpenGL ES debugging. The OpenGL ES Frame Debugger interface modifies several areas of the Xcode workspace window to provide information about the OpenGL ES rendering process, as shown in Figure B-3 and Figure B-4 and summarized below. (The frame debugger does not use the inspector or library panes, so you may wish to hide Xcode’s utility area during OpenGL ES debugging.)

In the OpenGL ES frame debugger interface, the debug navigator is replaced by the OpenGL ES frame navigator. This navigator shows the OpenGL ES commands that render the captured frame, organized sequentially or according to their associated shader program. Use the Frame View Options popup menu at the top of the frame navigator to switch between view styles.

View Frame By Call

View the captured frame by call when you want to study OpenGL ES commands in sequence to pinpoint errors, diagnose rendering problems, or identify common performance issues. In this mode, the frame navigator lists commands in the order your app called them. Error or warning icons appear next to commands that result in OpenGL ES errors or that may indicate performance issues. You can add structure to this list by using the glPushGroupMarkerEXT and glPopGroupMarkerEXT functions to annotate groups of OpenGL ES commands—these groups appear as folders you can expand or collapse to show more or less detail. (For details, see Annotate Your OpenGL ES Code for Informative Debugging and Profiling.) You can also expand an OpenGL ES command to show a stack trace indicating where in your application code the command was issued. Use the context menu to choose whether to abbreviate command names and which commands, groups, and warnings to show. Use the flag icon at the bottom of the navigator to switch between showing all OpenGL ES commands and showing only those which draw into the framebuffer. Clicking an OpenGL ES command in the list navigates to that point in the OpenGL ES command sequence, affecting the contents of other areas of the frame debugger interface, as discussed below, and showing the effects of the OpenGL ES calls up to that point on the attached device’s display.

View Frame By Program

View the captured frame by program when you want to analyze the GPU time spent on each shader program and draw command. Expand the listing for a program to see the time contribution from each shader in the program and each draw call. Expand the listing for a draw call to show a stack trace indicating where in your application code that command was issued. Use the context menu to refine the display—you can choose whether programs are sorted by their time contributions and whether timing information is displayed as a percentage of the total rendering time. Clicking a program or shader shows the corresponding GLSL source code in the primary editor.
Clicking an OpenGL ES command navigates to that point in the frame capture sequence.

When working with a frame capture, you use the primary editor to preview the framebuffer being rendered to, and the assistant editor to examine OpenGL ES resources and edit GLSL shader programs. By default, the assistant editor shows a graphical overview of all resources currently owned by the OpenGL ES context, as shown in Figure B-3. Use the assistant editor’s jump bar to show only those resources bound for use as of the call selected in the frame navigator, or to select an individual resource for further inspection. You can also double-click a resource in the overview to inspect it. When you select a resource, the assistant editor changes to a format suited for tasks appropriate to that resource’s type.

Previewing Framebuffer Contents

The primary editor shows the contents of the framebuffer as rendered by the draw call currently selected in the frame navigator. (If the selected OpenGL ES command in the frame navigator is not a drawing command—for example, a command that sets state such as glUseProgram—the framebuffer reflects the rendering done by the most recent draw call prior to the selection.) You can also navigate the sequence of OpenGL ES commands using the jump bar at the top of the primary editor.

The editor shows a preview for each framebuffer attachment currently bound for drawing. For example, most approaches to 3D rendering use a framebuffer with attachments for both color and depth, as illustrated in Figure B-5. Use the controls in the lower left of the editor to choose which framebuffer attachments are currently shown. Clicking the info button, to the left of each framebuffer attachment’s name, shows a popover detailing the attachment’s properties, as shown in Figure B-6. Click the settings button, to the right of the framebuffer attachment’s name, to show a popover with controls that adjust the preview image. For example, you can use these controls to make a certain range of Z values in a depth buffer more visible in its grayscale preview, as shown in Figure B-7. Each framebuffer attachment preview also shows a green wireframe highlighting the effect of the current draw call (as illustrated in Figure B-3). Use the context menu in a preview image to choose whether the highlight appears in the preview or on the display of the attached device.

Editing Shader Programs

When you select a shader program in the assistant editor’s jump bar or resource overview, the assistant editor shows the GLSL source code for that program’s fragment shader (as shown in Figure B-8). When you select a program in the frame navigator (see View Frame By Program), the primary editor shows the program’s fragment shader and the assistant editor shows its vertex shader. In any editor showing a fragment shader, you can use the jump bar to switch to its counterpart vertex shader, and vice versa. Each line of the shader source code is highlighted in the right margin with a bar representing its relative contribution to rendering time. Use these to focus your shader optimization efforts—if a few lines account for a greater share of rendering time, look into faster alternatives for those lines. (For shader performance tips, see Best Practices for Shaders.) You can make changes to the shader source code in the editor. Then, click the Update button below the editor (shown in Figure B-8) to recompile the shader program and see its effects on the captured frame.
If compiling the shader results in error or warning messages from the GLSL compiler, Xcode annotates the shader source code for each issue. The recompiled shader program remains in use on the device, so you can resume running your app. Click the Continue button in the debug bar to see your shader changes in action.

Inspecting Vertex Data

When you inspect an array buffer, the assistant editor shows the contents of the buffer (see Figure B-9). Because a buffer in OpenGL ES memory has no defined format, you use the pop-up menus at the bottom of the editor to choose how its contents appear (for example, as 32-bit integers or floating-point values, or as twice as many 16-bit integers or half-float values), and how many columns Xcode uses to display the data. A vertex array object (VAO) encapsulates one or more data buffers in OpenGL ES memory and the attribute bindings used for supplying vertex data from the buffers to a shader program. (For details on using VAOs, see Consolidate Vertex Array State Changes Using Vertex Array Objects.) Because the VAO bindings include information about the format of the buffers’ contents, inspecting a VAO shows its contents as interpreted by OpenGL ES (see Figure B-10).

Viewing Textures or Renderbuffers

When you inspect a texture or renderbuffer, the assistant editor shows an image preview of its contents. You can use the same controls found in the primary editor to get more information about the texture object or renderbuffer and to adjust the image preview. For textures, you can use an additional control in the lower left corner of the assistant editor to preview each mipmap level of the texture and (if applicable) each face of a cube map texture (as shown in Figure B-11).

The debug bar provides multiple controls for navigating the captured sequence of OpenGL ES commands (shown in Figure B-12). You can use its menus to follow the hierarchy shown in the frame navigator and choose a command, or you can use the arrows and slider to move back and forth in the sequence. Press the Continue button to end frame debugging and return to running your application.

The frame debugger has no debug console. Instead, Xcode offers multiple variables views, each of which provides a different summary of the current state of the OpenGL ES rendering process. Use the popup menu to choose between the available variables views, discussed in the following sections.

The All GL Objects View

The All GL Objects view, similar to the Bound GL Objects view shown on the right in Figure B-13, lists the same OpenGL ES resources as the graphical overview in the assistant editor. Unlike the graphical overview, however, this view can provide more detailed information about a resource when you expand its disclosure triangle. For example, expanding the listing for a framebuffer or buffer object shows information otherwise available only through OpenGL ES query functions such as glGetFramebufferAttachmentParameteriv. Expanding the listing for a shader program shows its status, attribute bindings, and the currently bound value for each uniform variable.

The Bound GL Objects View

The Bound GL Objects view, shown on the right in Figure B-13, behaves identically to the All GL Objects view, but lists only resources currently bound for use as of the selected OpenGL ES command in the frame navigator.

The GL Context View

The GL Context view, shown on the left in Figure B-13, lists the entire state vector of the OpenGL ES renderer, organized into functional groups.
When you select a call in the frame navigator that changes OpenGL ES state, the changed values appear highlighted. For example, calling the glFrontFace function changes and highlights values in the Culling section of the state list. Enabling blending with the glEnable(GL_BLEND) call or changing blending parameters with the glBlendFunc function changes and highlights values in the Blending section of the state list.

The Context Info View

The Context Info view, shown on the right in Figure B-14, lists static information about the OpenGL ES renderer in use: name, version, capabilities, extensions and similar data. You can look through this data instead of writing your own code to query renderer attributes.

The Auto View

The Auto view, shown on the left in Figure B-14, automatically lists a subset of items normally found in the other variables views and other information appropriate to the selected call in the frame navigator. For example: If the selected call results in an OpenGL ES error, or if Xcode has identified possible performance issues with the selected call, the view lists the errors or warnings and suggested fixes for each. If the selected call changes part of the OpenGL ES context state, or its behavior is dependent on context state, the view automatically lists relevant items from the GL Context view. If the selected call binds a resource or makes use of bound resources such as vertex array objects, programs, or textures, the view automatically lists relevant items from the Bound GL Objects view. If a draw call is selected, the view lists program performance information, including the total time spent in each shader during that draw call and, if you’ve changed and recompiled shaders since capturing the frame, the difference from the baseline time spent in each shader. (Program performance information is only available when debugging on an OpenGL ES 3.0–capable device.) In addition, this view lists aggregate statistics about frame rendering performance, including the number of draw calls and frame rate.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476464.74/warc/CC-MAIN-20240304165127-20240304195127-00215.warc.gz
CC-MAIN-2024-10
19,454
80
https://www.biostars.org/p/234822/
code
Hi, Could someone please help me with removing reads from a fastq file from a specific genomic location? I have only been able to find methods for removing reads from a specific chromosome in the aligned SAM file using samtools, or from fastq using sequence IDs. I would like to remove PCR contaminants from my fastq files by giving specific genome coordinates. I appreciate your help!

FASTQ files do not contain coordinates, so it is not possible to remove data based on that parameter. You would need to align and then filter, or filter by sequence with one of the adapter-trimming tools (e.g., BBDuk or Trimmomatic).

Instead of depending on genome coordinates, you may want to use clumpify.sh from the BBMap suite to identify duplicates (you can identify optical, PCR and other kinds) independent of alignments. Then, depending on the severity of the issue, decide what to do with them (just mark, or remove). See this post for additional details on how you would use this tool: A: Introducing Clumpify: Create 30% Smaller, Faster Gzipped Fastq Files
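Since coordinates exist only after alignment, the "align and then filter" route suggested above looks roughly like the sketch below. This is a hedged outline rather than a tested pipeline: the reference, read file, and contaminant.bed coordinates are placeholders, and the -L/-U split should be verified against your samtools version's documentation.

```bash
# Align, sort, and index (placeholder file names throughout).
bwa mem ref.fa reads.fq | samtools sort -o aln.bam -
samtools index aln.bam

# -L keeps reads overlapping the BED regions; -U receives the rest.
samtools view -b -L contaminant.bed -U clean.bam -o contaminant.bam aln.bam

# Convert the retained reads back to FASTQ.
samtools fastq clean.bam > clean.fq
```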
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00530.warc.gz
CC-MAIN-2020-40
1,055
4
https://encodestudio.net/project/sodic-caeser-bay/
code
Our computational process is designed to find optimum solutions based on the project data and parameters. The process is cyclical, from data to design, in order to optimize the concept values. Our computational method enables us to render a space where every object in the scene is linked to the others through an abundance of data. Data values are derived from: areas; a matrix of relations between project elements [1.0 to 0.0]; and environmental aspects [solar radiation, airflow, sharing, visual accessibility, etc.]. By generating all the possibilities between the project elements, we arrive at the optimum solutions for each type of villa. Afterward, the design goes into a sub-cyclical process of development and evaluation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00516.warc.gz
CC-MAIN-2023-23
721
1
https://www.animaapp.com/blog/inside-anima/anima-raises-2-5m-in-seed-round/
code
Anima raises $2.5M in seed round

Back in 2017, when we started Anima, one thing was crystal clear. Handoff tools have improved our lives, but there was still a critical gap in the design-to-development process. As a developer (married to a designer), I knew first-hand how tense design handoff can be. So did Michal and Or, my co-founders. We lived through the pain from both sides and felt a deep determination to solve it. And so, we set out to solve the problem. We founded Anima. Anima’s goal is to help product teams deliver better products, faster.

Product and community first

We graduated from the Y Combinator Summer 2018 batch. At the time, we decided to focus on the product rather than fundraise. Converting design to high-quality code is a hard task. Winning the hearts of designers and engineers is 10x harder. We started with bringing code capabilities to design tools (today an industry standard). Then, animations and interactions, then high-fidelity prototypes based on code. And all that time, we pursued our end goal — automating the design-to-code process. Our fantastic community grew to over 300,000 designers and developers. Today, we have users from product teams at Google, IBM, Verizon, Salesforce, BlueJeans, Starbucks, and many other great companies.

Bugsy. The team’s mascot

Taking it to the next level

By the end of 2019, Anima was already profitable. Companies worldwide were getting real value out of it. And we had a clear vision and understanding of where we were headed. Our roadmap for becoming the first continuous design-to-development platform was drawn. To execute it, we decided to fundraise. At the time, we already knew Hetz Ventures, and the choice to go with investors who share our vision was natural. Along with Hetz joined Zohar Gilon, an Israeli super-angel. Looking ahead, our focus is set on developers and developer-friendly code. We believe component-based systems are essential to a continuous design-to-development process. The next version of Anima will address developer-friendly code, coming before the end of the year.

Building and growing during the pandemic

The Anima team has tripled in size during Covid-19. Although it was far from ideal, we decided to turn the lemon into lemonade.

Anima team. Operating 100% online, including beer night

Working 100% online, we broadened our horizons and hired people from all over the world. We now have team members in Tel-Aviv, NYC, Australia, Morocco, and Barcelona — and we’re still hiring globally. We joke that wherever you are in the world, the weekend beer and ice cream will find you.

Anima in the news

A big thank you to our community, for allowing us to evolve using their amazing feedback and for spreading the word. To our talented team. To our investors. And to everyone else who has helped along the way. The best is yet to come.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510319.87/warc/CC-MAIN-20230927171156-20230927201156-00225.warc.gz
CC-MAIN-2023-40
2,866
21
https://www.tomwalton.blog/posts/summary-of-retry-strategies/
code
A Brief Summary of Retry Strategies The goal of this post is to help you pick the right retry strategy for your use case. This will enable you to overcome transient API request failures, without causing wider system failure. We will first consider what happens with no retry strategy, compared with using a simple linear backoff strategy. Then we will see how adding jitter to the exponential backoff strategy can solve the thundering herd problem. Finally, we will solve the retry storm problem by using the token bucket retry condition. Useful software systems are often complex. To get work done, they need to talk to other services. This typically occurs as an API request over a network, where all sorts of things can go wrong. Servers crash, network packets get lost, compute resources are depleted, and so on. While we try to minimise the occurrence of errors as much as possible, it’s impractical to eliminate them all. This means we must design our systems to handle errors gracefully. Fortunately, many errors are transient, and so we can improve the reliability of our service by retrying failed requests. Such errors include HTTP 408 (request timeout), 429 (too many requests) and 5XX (server error) response codes. However, we should only retry API requests if we are certain it is safe to do so. This means that any API request with side effects, such as resource creation, should be idempotent. This guarantees that the side effects only happen once, regardless of how many times the API is called. Furthermore, retries are selfish. They increase the chance that a given request succeeds, but at the cost of increasing the load on both your own service, and the service you are calling. They also increase the response time of the operation you are performing. If a downstream service has returned an error because it is struggling to deal with the current load, sending lots of retry requests will only make things worse. If a user is waiting on a request, it doesn’t matter if the request succeeds on the 5th retry 10 seconds later, because the user won’t be there anymore. For these reasons it is critical to consider the context in which the service is used when deciding upon the most suitable retry strategy. Strategy 1: Don’t Retry To start with let’s keep things simple and not use any retry strategy. Transient failures in dependencies will be returned as errors in our own service. This could mean a message goes to a dead-letter queue, or a user sees an error message on a UI. One benefit is that we reduce resource usage in both our service, and the service we are calling, with no chance of retries overwhelming the downstream service. If retries are rare, this simple approach may be adequate. However, in most systems this reduced reliability would cause a poor user experience, and wasted time spent debugging transient issues. Strategy 2: Linear Backoff A simple strategy is to wait a fixed amount of time before trying a request again. We can do this multiple times up to a set number of retries, or a timeout threshold. This greatly increases reliability, but it can be difficult to tune the wait time appropriately. Too small and the transient error is less likely to be resolved, and the burst of traffic could overwhelm downstream services. Too long and the total response time of the operation increases. Limited server resources such as memory, threads and connections are held for longer, reducing the total throughput of the system. 
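A minimal sketch of this strategy in Python follows; the call_api callable, the TransientError class, the retry count, and the wait time are placeholders rather than values from the post:

```python
import time

class TransientError(Exception):
    """Stand-in for whatever exception marks a retryable failure."""

def with_linear_backoff(call_api, retries=3, wait_s=0.5):
    # Try the call, waiting a fixed interval between attempts.
    for attempt in range(retries + 1):
        try:
            return call_api()
        except TransientError:
            if attempt == retries:
                raise              # out of retries: surface the error
            time.sleep(wait_s)     # fixed wait, regardless of attempt
```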
Strategy 3: Exponential Backoff We want to resolve transient issues as quickly as possible, without overwhelming the downstream API. Therefore it would be ideal if we could have a very short wait time between retries to begin with. And, if this first retry fails, we can use progressively longer gaps between retries. Eventually, after a certain time threshold, we would give up. This is exactly what exponential backoff gives us. Considering the exponential formula: y = a * b^n, we can tune the initial retry wait time, a, and the scaling factor, b, to change the retry wait time, y, for a given attempt, n. For example, Figure 1 shows us how the wait time grows if we wait 10 ms at first, and then quadruple the wait time on each attempt. Figure 1 - Exponential backoff function for a=10, b=4 The Thundering Herd Problem Exponential backoff is all well and good when we’re only worrying about one client. However, in practice we likely have hundreds of threads making API calls. Furthermore, traffic often comes in bursts, rather than being evenly distributed over time. This could be due to an upstream batch process, a scheduled job or a news event that triggers an influx of users. As Figure 2 illustrates, this leads to lots of requests at certain intervals, and then large gaps between these intervals. This is a very inefficient use of resources. Figure 2 - Distribution of requests when using exponential backoff retry strategy. 100 requests, a=10, b=4, 1 % randomness factor applied to wait times to simulate real-world variability. Bucketing in 20 ms increments. A common solution is to add jitter to the wait time. This adds randomness to the wait times, to spread the retries out more evenly over time. The underlying exponential behaviour is still there, but sharp peaks from bursts of traffic are smoothed out. For example, in Figure 3 we apply a random scaling factor between 0.5 and 1.5 to the wait time. For the final retry attempt, this reduces the maximum concurrent requests from 35 in Figure 2, to 5 in Figure 3. Figure 3 - Distribution of requests when using exponential backoff retry strategy. 100 requests, a=10, b=4, 50 % randomness factor applied to wait times to simulate jitter. Bucketing in 20 ms increments. Exponential backoff with jitter works fine when considering a single service in isolation. However, often a single API call made by a user results in a chain of API calls downstream. Let’s think about what happens if all these services implement retries, and the final service in the chain is starting to fail due to a temporary spike in load. Figure 4 - Four service calls in a chain When service E fails, service D will retry the request up to its retry limit. If all these retries fail, service C will retry the request to service D up to its limit, and so on. It’s clear that the retry behaviour is exponential in nature: y = x^n, where the total number of calls, y, is the number of calls in each service, x, raised to the power of the number of service calls in the chain, n. Note that x is equal to the number of retries + 1. For example, with 2 retries per service call, and 4 service calls in the chain, that’s 81 API calls to service E. And remember, this is per request to service A. So if there’s a spike of even 1000 requests to service A, service E will get bombarded with 81,000 requests. This is the difference between service E throttling a few incoming requests, then dealing with them a short while later; and total failure of service E. 
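Here is the same retry loop upgraded to exponential backoff with jitter, as described above. The a=10 ms initial wait and b=4 factor mirror Figure 1, and the 0.5x to 1.5x jitter mirrors Figure 3; the TransientError class is again a placeholder:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for whatever exception marks a retryable failure."""

def with_exponential_backoff(call_api, retries=3, initial_ms=10, factor=4):
    # Wait y = a * b^n between attempts, scaled by a random jitter factor.
    for attempt in range(retries + 1):
        try:
            return call_api()
        except TransientError:
            if attempt == retries:
                raise
            wait_ms = initial_ms * factor ** attempt
            time.sleep(wait_ms * random.uniform(0.5, 1.5) / 1000)
```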
We have seen how retries can be useful when failures are rare, but they can wreak havoc when failures are common. To solve this problem, it would be good to have some sort of bi-modal behaviour: use retries freely when failures are infrequent, but aggressively limit retries when failures become frequent. This is what the Token Bucket Retry Condition gives us, used in the AWS SDK. This is how it works: - We start with a bucket containing, for example, 10 tokens. - A successful retry does not consume any tokens from the bucket. - A failed retry attempt consumes 1 token. - When the bucket is empty, we cannot perform any retries. However, we can still perform the initial request. - Only successful requests may refill the bucket. For example, we might refill 1 token for every 10 successful requests, up to the bucket’s maximum capacity. Figure 5 - Illustration of how failed and successful requests remove and add tokens to bucket when using Token Bucket Retry Condition This means that when the bucket is empty, only the first API request is sent out, but no retries. This solves the retry storm problem, as the equation for the total number of retries becomes y = 1^n, which is always 1, regardless of the number of services in the chain. It’s important to note that the retry behaviour is now stateful, thereby increasing the complexity. Let’s say we have 10 servers in our service, and we store the token bucket state in memory on each one. It will take 100 retry failures across the service to empty all buckets; with 20 servers, 200 failures. Therefore you should take the number of compute environments into account when configuring the parameters for the bucket. This is especially true when using Function-as-a-Service compute environments such as AWS Lambda. Here, you can have thousands of concurrent compute environments, and there are no guarantees on how long the state in each one will be retained for. Another related solution is the Circuit Breaker pattern, as explained by Microsoft. The key difference is that it blocks all API requests during its failure mode, rather than retries only. This can extend the time to recovery. We have seen how a simple linear backoff retry strategy helps us to recover from transient issues, but has its limitations. The exponential backoff with jitter strategy lets us recover quickly, while avoiding the thundering herd problem. Adding a token bucket retry condition can then mitigate the retry storm problem. These last retry strategies are certainly more complex, but they offer a strong return on investment. The key point is that you should choose the right retry strategy for your particular use case. It’s worth emphasising that retries are selfish. You are trading your limited system resources for increased reliability and a better user experience. - Amazon Builder’s Library: Timeouts, retries, and backoff with jitter - Software Reliability, Jiantao Pan - Amazon Builder’s Library: Making retries safe with idempotent APIs - Google Cloud Docs: Retry Strategy - AWS Architecture Blog: Exponential Backoff And Jitter - Wikipedia: Thundering herd problem - AWS Developer Tools Blog: Introducing Retry Throttling - AWS SDK: TokenBucketRetryCondition - Microsoft Azure Docs: Circuit Breaker pattern
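To make the token-bucket mechanics above concrete, here is a minimal sketch. The capacity of 10, the cost of 1 token per failed retry, and the refill of 1 token per 10 successes follow the example numbers in the text; the class itself is an illustration, not the AWS SDK's TokenBucketRetryCondition:

```python
class RetryTokenBucket:
    """Bi-modal retry gate: retries flow freely while failures are rare,
    but stop once the bucket empties. Only successes refill it."""

    def __init__(self, capacity=10, refill_per_success=0.1):
        self.capacity = capacity
        self.tokens = float(capacity)       # start with a full bucket
        self.refill = refill_per_success    # 1 token per 10 successes

    def can_retry(self):
        # An empty bucket blocks retries; initial requests still go out.
        return self.tokens >= 1.0

    def record_failed_retry(self):
        self.tokens = max(0.0, self.tokens - 1.0)

    def record_success(self):
        self.tokens = min(float(self.capacity), self.tokens + self.refill)
```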
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00225.warc.gz
CC-MAIN-2023-06
10,267
43
https://tutorialmeta.com/question/deploying-laravel-with-docker-containers
code
Deploying Laravel With Docker Containers

I plan to deploy my Laravel application with Docker containers. I need the following components for my application: - MySQL server - nginx server - certbot for SSL activation - Queue worker for Laravel

Since the application is still under development (and probably always will be), it should be very easy to update (I will automate this with GitLab CI/CD) and it should have as little downtime as possible during the update. Also, I want to be able to host multiple instances of the application, whereby only the .env file for Laravel is different. In addition to the live application, I want to host a staging application. My current approach is to create a container for the MySQL server, one for the nginx server and one for the queue worker. The application code would be a layer in the nginx server container and in the queue worker container. When updating the application, I would rebuild the nginx container and the queue worker container. Is this a good approach? Or are there better ways to achieve this? And what would be a good approach for my MySQL server, nginx server, PHP version, etc. to stay up to date without downtime for the application?

The main idea of Docker is to divide your app into containers, so yes, it is good to have one container per service. In your example, I suggest keeping MySQL in one container, the queue worker in another, and so on. As a result, you will have a container for each service. Then I suggest creating an internal Docker network and connecting the containers to it. I also suggest using Docker volumes to store all your application data. To make configuration much easier, I suggest using Docker Compose.
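A minimal docker-compose sketch of that layout might look like the following. Service names, images, and paths are illustrative assumptions (certbot is omitted for brevity), and each instance of the application would get its own .env file:

```yaml
# docker-compose.yml (illustrative sketch)
services:
  app:                         # PHP-FPM container running the Laravel code
    build: .
    env_file: .env
    volumes:
      - ./:/var/www/html
  nginx:
    image: nginx:stable
    ports: ["80:80", "443:443"]
    volumes:
      - ./docker/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on: [app]
  mysql:
    image: mysql:8.0
    env_file: .env
    volumes:
      - dbdata:/var/lib/mysql  # named volume keeps data across rebuilds
  queue:                       # same image as app, different command
    build: .
    env_file: .env
    command: php artisan queue:work
    depends_on: [mysql]
volumes:
  dbdata:
```

With this split, a deploy rebuilds only the app and queue images, while the MySQL data lives in a named volume and survives rebuilds.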
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00830.warc.gz
CC-MAIN-2022-49
2,445
25
http://www.devhardware.com/c/a/Computer-Systems/Fundamentals/8/
code
Many first-time system builders are haunted by the question, “What if it doesn’t work?” Or, worse still, “What if it goes up in flames the first time I turn it on?” Set your mind at ease. This isn’t rocket surgery. Any reasonably intelligent person can build a system with a high degree of confidence that it will work normally the first time it is turned on. If you use good components and assemble them carefully, you’re actually less likely to encounter problems with a home-built system than with a prebuilt mail-order system or with a system off the shelf from your local superstore. Shipping can be tough on a computer. We always pop the cover of PCs that have been shipped, and often find that something has been jarred loose. Our editor reports that when he shipped a PC to his parents, it arrived with the AGP card completely out of its slot. Not good. Even worse, shipping can cause the CPU cooler to break loose, particularly on AMD Athlon XP systems. A heavy heatsink rattling around can do some serious damage to other components, but even that’s not the major concern. Running a system without a CPU cooler causes an Athlon XP to go up in smoke in seconds, literally. If someone ships a system to you, always open it up and verify that everything is properly connected before you apply power to the system. Still, problems can happen. So, while it would take a whole book to cover troubleshooting in detail, it’s worth taking a few pages to list some of the most likely problems and solutions. Fortunately, it’s easier to troubleshoot a newly built system than a system that’s been in use for some time. Fewer things can go wrong with a new system. You can be certain that the system is not infected with a virus or malware, for example, and driver problems are much less likely on a new system because you have all the latest drivers installed. The best time to troubleshoot is while you’re building the system. A good carpenter measures twice and cuts once. Take the same approach to building your system, and you’re unlikely to need any of this troubleshooting advice. As you build the system, and then again before you apply power for the first time, verify that all cables are oriented and connected correctly. Make sure expansion cards, memory modules, the processor, and so on are fully seated, and that you haven’t left a tool in the patient. Each project system chapter includes a final checklist. Verifying the items on that checklist eliminates about 99% of potential problems. Possible problems fall into one of four categories: easy versus hard to troubleshoot, and likely versus unlikely. Always check the easy/likely problems first. Otherwise, you may find yourself replacing the video card before you notice that the monitor isn’t plugged in. After you exhaust the easy/likely possibilities, check the easy/unlikely ones, followed by hard/likely, and, finally, hard/unlikely. Other than sheer carelessness—to which experienced system builders are more prone than are novices—most problems with new systems result from one or more of the following: Cable problems. Disconnected, mis-connected, and defective cables cause more problems than anything else. The plethora of cables inside a PC makes it very easy to overlook a disconnected data cable or to forget to connect power to a drive. It’s possible to connect some cables backward. Ribbon cables are a particularly common problem because some can be connected offset by a row or column of pins. And the cables themselves cannot always be trusted, even if they are new. 
If you have a problem that seems inexplicable, always suspect a cable problem first. Fortunately, most problems with defective cables involve ribbon cables, and those are pretty easy to come by. For example, when we recently assembled a new PC, the motherboard came with two IDE cables and a floppy drive cable. The floppy drive came with a cable, the hard drive with another IDE cable, and the optical drive with still another IDE cable. That gave us four IDE cables and two floppy cables, so we ended up with two spare IDE cables and a spare floppy cable. Those went into our spares kit, where they’ll be available if we need to swap cables to troubleshoot another system. One of our technical reviewers observes: “A good flashlight with a tight beam (I use a mini Maglight) really helps to spot offset ribbon connector problems, even if workspace lighting is otherwise adequate. I’ve done systems where a handheld magnifier became an indispensable tool.” Configuration errors. Years ago, motherboards required a lot more manual configuration than modern motherboards do. There were many switches and jumpers, all of which had to be set correctly or the system wouldn’t boot. Modern motherboards auto-configure most of their required settings, but may still require some manual configuration, either by setting physical jumpers on the motherboard or by changing settings in CMOS Setup. Motherboards use silk-screened labels near jumpers and connectors to document their purposes and to list valid configuration settings. These settings are also listed in the motherboard manual. Although it is rare, we have encountered errors in the silk-screened motherboard labels or the manuals. (On one notable occasion, the motherboard labels and the manual agreed and were both wrong, which cost us several hours of aggravation.) Always check both the motherboard labels and the manual to verify configuration settings. If the motherboard maker posts updated manuals on the Web, check those as well. Incompatible components. In general, you can mix and match modern PC components without worrying much about compatibility. For example, any hard drive or optical drive works with any IDE interface, and any ATX12V power supply is compatible with any ATX12V motherboard (although the power supply may not have adequate power). Most component compatibility issues are subtle. For example, you may install a 1 GB memory module in your new system, but when you power it up, the system sees only 256 MB or 512 MB because the motherboard doesn’t recognize 1 GB memory modules properly. All of the components we recommend in the project system chapters are compatible with one another, but if you use other components it’s worth checking the detailed documentation on the manufacturers’ web sites to verify compatibility. Dead-on-arrival components. Modern PC components are extremely reliable, but if you’re unlucky one of your components may be DOA. This is the least likely cause of a problem, however. Many first-time system builders think they have a DOA component, but the true cause is almost always something else—usually a cable or configuration problem. Before you return a suspect component, go through the detailed troubleshooting steps we describe. Chances are the component is just fine. A healthy PC finishes the POST (Power-On Self-Test) with one happy-sounding beep. If you hear some other beep sequence during startup, there is some sort of problem. 
BIOS beep codes provide useful troubleshooting information, such as identifying the particular subsystem affected. Beep codes vary, so check the motherboard documentation for a description of what each code indicates. Here are the problems you are most likely to encounter with a new system, and what to do about them. This chapter is from Building the Perfect PC by Robert Bruce Thompson and Barbara Fritchman Thompson (O'Reilly, 2004, ISBN: 0596006632). Check it out at your favorite bookstore today.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946011.29/warc/CC-MAIN-20180423125457-20180423145457-00444.warc.gz
CC-MAIN-2018-17
7,830
18
https://www.devrel.jobs/jobs/developer-relations-manager-46
code
At Pachyderm, we're building an open-source enterprise-grade data science platform that lets you deploy and manage multi-stage, language-agnostic data pipelines while maintaining complete reproducibility and provenance. Our system, developed with open source roots, shifts the paradigm of data science workflows by providing reproducibility, data provenance, and opportunity for true collaboration. Pachyderm utilizes modern technologies like Docker and Kubernetes to build an entirely new method of analyzing data. Offered both as an in-house solution and as a hosted service, Pachyderm brings together version control for data with the tools to build scalable end-to-end ML/AI pipelines, while empowering users to use any language, framework, or tool they want. If you want to learn more about our grand vision, read what has become our “manifesto” and take a look at how some of our customers are using Pachyderm here. You can also check out our product on GitHub because it’s open-source, and try our cloud service for free.

What it’s like being part of The Pach

Pachyderm is a rapidly growing Series B company funded by top VCs — Benchmark, Decibel, M12, and Y Combinator. Pachyderm has always and will always embrace a “remote-first” approach to growing our team. This allows us to hire a diverse group of individuals across the country (and world!) while giving our team members the flexibility to work from anywhere. Being a member of The Pach means joining a supportive team that cares about you, values kindness and works hard to create an open and transparent workplace. Pachyderm is growing, and by joining you will be able to get in right at the ground floor and have an enormous impact on the success and direction of the company and product.

Pachyderm is looking for a Developer Relations Manager (see also Sr. Developer Advocate) to be the face of Pachyderm to our community and users. You’ll work across marketing, content, evangelism, and product teams to write new content and engage with users and prospects online. You’ll establish strong relationships with data scientists and ML engineers around the world in a scalable way, by crafting great resources to help them build with Pachyderm, including open source demos, code samples, documentation, how-to guides, blog posts, and demo hours. You’ll foster developer communities, championing their interests and translating their feedback into actionable product insights.

Primary Job Responsibilities

Significant equity, 401k and full benefits (100% medical, 99% dental and vision, 50% for all dependents). Flexible PTO - work/life balance is important and we want you to take time off to rejuvenate! Remote friendly - we were remote before remote was cool and we intend to continue to invest in a remote-first culture. Tons of fun swag and surprise packages sent to your doorstep. Tech and office stipends - what you buy is yours to keep. Education and donation stipends - we want to support your career growth and the community. Supportive parental leave (see also: work/life balance). Encouraged fun - game days, fun activities, zoom hangouts and more (and - when responsible - visits to our home base for team on-sites).

We can’t wait to meet you and hope you’ll join our PACH!
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00510.warc.gz
CC-MAIN-2023-14
3,305
18
https://askleo.com/backup-drive-full/
code
No matter what tool you use, a properly configured backup system collects more and more data in the form of backups as time goes on. If you’re backing up to an external drive, eventually it will fill up, no matter how big it is. What to do is easy: delete stuff to make room. What stuff to delete depends on the type of backup you’re doing, how long you want to keep things, and what other storage options you want to use. Dealing with a full backup drive Barring other action, backup drives eventually fill up. - You can delete full backups you know you’ll no longer need. - You can delete incremental backups only in sets that accompany the full backup on which they’re built. - You can delete older backups if all you care about is recent crash recovery. - You may want to keep some older backups from which to recover files further back in time. Backup tools can automate much, if not all, of this process. Full backups are big, but easy I’ve previously discussed the differences between full and incremental backups. In short: - A full backup is a backup of absolutely everything on your disk at the time the backup is taken. - An incremental backup includes only those things that changed since the immediately preceding backup (full or incremental). If you take full backups each time, then while your disk will fill up much (much!) quicker, your options are simple: just pick which you no longer need and delete them. Each backup stands on its own, so there’s nothing to worry about. Perhaps keep backups from yesterday, last week, and last month, and delete everything else. If you’re taking incremental backups, as I suspect most do, things get a bit trickier. Incremental backups save space, but are more complicated An incremental backup is fairly useless on its own. It relies on all the incremental backups coming before it, all the way to the most recent full backup, to create a picture of the data being backed up. Thus, when you want to clean up, you need to make sure to keep all of those. The best way to think of it is that the initial full backup, and all the incrementals after it, comprise one backup set. You can delete the set, or not, but only as a complete set. If all you’ve been doing is incremental backups since day one, then you can’t really safely delete anything. Deleting the initial full backup, or any of the incrementals, would cause all the backups in the set taken after the deleted file was created to be invalid. You don’t want to randomly delete incremental backups on their own. A blended approach is best I recommend a blended approach: take periodic full backups and more frequent incrementals. Each time you take a full backup, you “reset the clock”. That full backup stands on its own, and any of the preceding backups, incremental or otherwise, can be safely deleted depending on your own needs and plans. Many backup programs (including my recommendations, Macrium Reflect and EaseUS Todo) can automate this periodic full and daily incremental approach. It’s my approach. Once a month, I take a full backup of my machine, and every night, an incremental. As backups accumulate, the backup software has been configured to automatically delete the oldest backups to make space if needed. I generally have two or three months of backups I can refer to, should I need to. What are your backups for, anyway?
Once we know what's safe to delete from a technical point of view — anything prior to the most recent full backup would be a good rule of thumb — we need to think about why we back up.
The most common need for a backup is to recover from a system crash. Your system dies, and you restore the most recent backup. In my case, if my machine dies, I can restore it to whatever state it was in when the most recent incremental backup was taken — the middle of the preceding night.
If that's all you expect to need, or the only case you care about, then that rule of thumb defines what's safe to delete — anything prior to the most recent full backup — and the minimum of what you need to keep — the most recent full backup and all subsequent incrementals. That will always allow you to restore to the most recent backup in case of a sudden catastrophe.
But backups can be useful for more than that.
Make your safety net larger
It's not uncommon to want something older than the most recent backup. Perhaps you installed malware and didn't realize it for a few days. Perhaps you want to recover a document you deleted last month. Perhaps you'd like to restore your machine to the relatively pristine state it was in shortly after you received it.
All of these scenarios, and more, can be accommodated by keeping the appropriate backups. You don't have to keep them all; just a select few. As one example, you might (see the sketch at the end of this page):
- Keep all incremental backup sets for two months.
- Keep all the monthly full backups for roughly three months.
- Keep the quarterly backups (the monthly full backups from January 1st, April 1st, July 1st, and October 1st) for six months or a full year.
- Keep the yearly backup (that January 1st full backup) for as long as you can.
- Keep the very first full backup for as long as you have the machine.
If my math is right, that's a storage requirement of about 6-10 times the size of a full backup throughout the year, growing by one each year as you keep a relatively permanent archive.
That strategy might be overkill. You might want something else, depending on your needs and how far back in time you want to go. I'd recommend at least keeping everything for a couple of months, and keeping your very first full backup for use as an alternative to reinstalling Windows from scratch, should you ever need to.
What you should do
What you actually need to keep depends on your own needs and plans. At a minimum, keep the most recent backup, of course. But if you envision using backups for more than just restoring to yesterday's machine or grabbing a file you just deleted by accident, consider setting up a system that allows you to keep a few of those snapshots as you move forward.
8 comments on "What Do I Do If My Backup Drive is Full?"
I use an external backup system (a 500 GB My Book) that I am happy with. My question is: should it be turned off when I am not backing up my information? I back up my photos, documents, and music files. Is this system at risk from viruses, spam, malware, etc. if I leave it on, just as the C: drive is?
Incremental backups normally work by backing up files for which the archive attribute has been set. Unfortunately, this attribute is set by many programs when they open a file, even if they don't change it.
Also, programs (including the operating system) often make 'housekeeping' changes that you don't necessarily need to keep. This can greatly increase the size of your incremental backups. I try to back up only files that I intentionally change. It's quite difficult to do this, so I won't go into details, except to say that I find the program xxcopy, which is a much more powerful alternative to xcopy, very useful in this regard. As a result, my daily incremental backups are usually only a few megabytes in size, whereas Leo says, in another article, that his are a few gigabytes. (I don't doubt that he works harder than I do, but probably not that much harder :-).) Periodically, I copy my incremental backups from the hard disk to DVDs before deleting them from the hard disk, but I retain indexes to them on the hard disk. This way, I can get back any version of a file I've worked on (except, of course, ones that were overwritten during a single day's work).
Re: "What do I do when my backup drive fills up?" I'm using Vista Business and do daily backups to an external hard drive (we'll call that BU Drive #1) using the Microsoft Backup program built into Vista. I have a second external hard drive which I also back up to (we'll call that BU Drive #2), and that one is kept off-site, i.e., when I leave the building it comes too. BU Drive #1 is rapidly filling up. I would have thought that I could just delete all backups on it and then start again with a full-system backup, but it appears that the operating system on the computer keeps track of all the backups done previously and assumes that they still exist on the backup drive. Is this the case? Can I just wipe the backup drive and start afresh with a new complete backup, or must I delete the old backup log files on the computer too? If so, where would I find the log files? Thanks, Steve Hunter, Melbourne, Australia
My internal disk (750 GB) is about to become full due to a tremendous amount of video from my camcorder. I'm thinking of moving all my video to an external drive to free up my internal drive for other things. If I do this, what is the best way to back up that external drive?
Great, Leo! The first clear (jargon-less) strategy I've heard on incremental and full backups, even from the manufacturer's forum. Thanks a GB!
Will Macrium delete an old backup when the backup drive gets full? If not, is there any software that will automatically remove old backups from the backup drive when it gets full?
You write about full and incremental backups. What about using differential backups during your month-long example?
It's an alternative, but my take is that they generally take more space in aggregate than they're worth, so rather than adding to the confusion already experienced by most, I settled on incrementals only.
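To make the retention schedule above concrete (and to speak to the comment asking about software that removes old backups automatically), here is a minimal sketch of the keep/delete decision for monthly full backups. It is not taken from any particular backup tool; the cutoffs mirror the example schedule in the article, and everything else is an illustrative assumption.

from datetime import date

def keep_full_backup(taken: date, today: date, is_first_ever: bool) -> bool:
    """Decide whether a monthly full backup is still worth keeping.

    Mirrors the example schedule in the article:
    - ordinary monthly fulls are kept for roughly three months,
    - quarterly fulls (January, April, July, October) for up to a year,
    - the yearly (January 1st) full for as long as possible,
    - the very first full backup for the life of the machine.
    """
    if is_first_ever:
        return True                 # keep for as long as you have the machine
    if taken.month == 1:
        return True                 # the yearly backup: keep as long as you can
    age_days = (today - taken).days
    if taken.month in (4, 7, 10):
        return age_days <= 365      # quarterly backups: six months to a full year
    return age_days <= 92           # other monthly fulls: roughly three months

# Example: an April 1st full backup is still kept ten months later.
print(keep_full_backup(date(2023, 4, 1), date(2024, 1, 20), is_first_ever=False))  # True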
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00577.warc.gz
CC-MAIN-2023-23
9,785
64
https://www.parasoft.com/products/parasoft-soatest/
code
Parasoft SOAtest integrates directly into your CI/CD pipeline and, when coupled with Parasoft Virtualize for API virtualization, makes continuous testing in DevOps a reality. Gain insights into your functional testing results with the informative dashboards and diagrams, detailed reports, and intelligent analytics provided by Parasoft CTP and DTP.
Jira: Enables organizations to associate risk with business requirements by correlating static analysis findings and test results with requirements, user stories, and defects. Test results can be both sent to Jira and aggregated within Parasoft DTP for full bidirectional traceability. In addition, the creation of new issues and defects can be automated based on the review and triage of test failures and static analysis violations in Parasoft DTP.
GitLab: Parasoft's integration with GitLab's DevOps platform enables organizations to develop and deliver software faster and more efficiently. It enables users to run Parasoft C/C++test, Jtest, or dotTEST static code analysis, unit testing, and structural code coverage as part of their highly automated CI/CD platform.
Jenkins: Enables teams to integrate continuous testing into their CI infrastructure. The Parasoft Findings plugin enables results from Parasoft analysis and testing tools to be integrated into Jenkins reports and to gate build and release pipelines based on those test results. The Parasoft Environment Manager plugin enables the rapid configuration of virtual test environments and the execution of test jobs.
JUnit: Parasoft Jtest can automatically create and execute JUnit tests, capture code coverage as they execute, and execute an optimized set of unit tests based on code changes. Parasoft Selenic can create JUnit-based Selenium tests that use the page object model from recording, self-heal the tests as they execute, and produce recommendations for how to update the tests. Parasoft SOAtest users can execute JUnit-format tests in conjunction with the various other types of tests available to run with Parasoft SOAtest.
Kafka: Enables support for the Apache Kafka transport for applicable messaging client tools in Parasoft SOAtest and message responders in Parasoft Virtualize, allowing users to take full advantage of SOAtest and Virtualize's rich interface when configuring, sending, emulating, and validating messages sent over Kafka.
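To give a feel for the send-and-validate flow the Kafka support describes, here is a minimal sketch using the generic kafka-python client rather than Parasoft's tooling; the broker address, topic name, and payload are assumptions.

from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "orders"            # assumed topic under test

# Send a test message, the way a messaging client step would.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, value=b'{"orderId": 42, "status": "NEW"}')
producer.flush()

# Read it back and validate the payload, the way an assertion step would.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating if nothing arrives in 5 s
)
for message in consumer:
    assert b'"orderId": 42' in message.value, "unexpected payload"
    break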
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100972.58/warc/CC-MAIN-20231209202131-20231209232131-00607.warc.gz
CC-MAIN-2023-50
2,468
9
https://venublog.com/2007/04/16/yahoo-bugs-video-test/
code
I am trying to test how I can link videos to my site, since I have been unsuccessful with Yahoo! Video for various reasons. Let's see if it works, so that I can share some files. Here is a small (~90 MB) video of a Yahoo! Beta Mail bug: after I log in, it goes directly to the Home page rather than to the Inbox, as indicated in the settings. It seems random; the other day it was going to the Inbox, and today it started going back to the Home page:
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00767.warc.gz
CC-MAIN-2022-49
417
2
http://stackoverflow.com/users/1062811/andy-thomas
code
Apparently, this user prefers to keep an air of mystery about them.
Answers (score, title, date):
- 24 | KnockoutJS - Observable Array of Observable objects | Apr 6 '12
- 14 | KnockoutJS - Observable Array of Observable objects | Apr 7 '12
- 6 | EntityFramework to Json workaround? (A circular reference was detected while serializing an object of type…DynamicProxies) | Dec 1 '11
- 4 | Where is the host and port coming from when I am reading Direct Line API doc from Microsoft Bot Framework | Apr 13
- 1 | Error generating ScaffoldingConnectionFactory | Nov 23 '11
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00224-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
509
6
https://www.colibriforazero.com/post/azero-domains-ama
code
Get ready for the future of domains with insights from the AZERO Domains team.
1. Can you kindly give us a brief introduction to yourself and AZERO Domains?
AZERO Domains is the official domain service for the Aleph Zero ecosystem and the first name service built in ink! 4.0 for the future of WASM-based smart contracts. We believe that on-chain domains can tremendously improve the user experience in web3. Instead of struggling with non-intuitive alphanumerical wallet addresses, users can use human-readable domains, such as alice.azero, to send and receive assets. Additionally, a domain allows you to create your own recognizable on-chain identity, enabling many interesting use cases in the areas of on-chain governance, social networks, and communication. Get more information at:
2. Given the landscape where numerous L1s are coming up, why did you decide to build on AZERO?
There are many reasons why we chose to work with the Aleph Zero team. First, we have been following their project since early 2021 and were impressed by their technical vision. They have identified key challenges facing DLTs today and have a clear and pragmatic plan to address them, including on-chain privacy. We are currently exploring different ways to enable privacy-preserving domains, and most of these can only be achieved with privacy incorporated in the base layer. This will enable us to provide a unique product that can solve various problems users face when interacting with other on-chain domain services. In addition, the low latency of Aleph Zero enables a web3-like user experience on the application layer and makes it easier to onboard new users.
Another significant advantage of building on Aleph Zero is the extensive support that ecosystem projects receive from the Aleph Zero foundation. This support ranges from R&D and design to legal and security considerations. Soon, the Aleph Zero foundation will launch an ecosystem accelerator program that includes a grant & ecosystem fund to provide even better support for new projects building on Aleph Zero.
3. What is the main problem that AZNS is trying to solve?
The main problem is user experience (UX). Alphanumeric wallet addresses are cumbersome to use and make blockchain interactions error-prone. Users have to check their addresses multiple times before making a transaction. In addition, sophisticated phishing attacks are on the rise. The Metamask team recently warned users of "address poisoning," a process in which scammers use vanity generators to create similar-looking addresses and then send a zero-value transaction to a user, poisoning the transaction history and increasing the likelihood that users will copy/paste the wrong address from the transaction history when making a new transaction.
Furthermore, we are working on solving the problem of domains exposing your entire on-chain activity. Once someone has created a brand around their name, they currently have to use several unlinked wallets to hide on-chain activities that are not meant to be publicly connected to the on-chain brand via the domain. We will leverage Aleph Zero's Liminal to obfuscate the connection between the address and the domain.
4. Congratulations on setting up a validator, on its success, and on your effort to help decentralise the AZERO network, especially on reaching a TVS of more than 3 million, which is no easy feat. All the nominators will get a whitelist spot to register a 5+ character domain; is there going to be any incentive for the nominators after the mint?
Thank you!
We are very happy that more than 4.1 million AZERO is staked with our validator. We believe that the combination of industrial-grade security, together with our partner Swiss Staking AG, and direct support for ecosystem builders is really attractive for many nominators. When it comes to rewards after the launch of our project, we have several ideas to ensure that nominators who support us will receive interesting rewards. But we also believe that the main argument for choosing a specific validator should be its performance in the network.
5. Did the team raise any funds via seed or private sales? Are you planning to do any, and if so, how much are you planning to raise? Can you shed some light on tokenomics for AZNS?
We were the first project to receive a larger monetary grant for developing a production-ready application for the Aleph Zero mainnet. This was after we successfully attended the ETHWarsaw hackathon, where the idea and development of the project kicked off. Regarding long-term funding, we are exploring different ways to ensure that the continued development of AZERO Domains is secured and that we have the resources to deliver our vision.
6. How do you plan on making AZNS decentralised? Is there a DAO mechanism in the roadmap?
From the outset, we recognized the importance of community governance for a core primitive like an on-chain name and identity service. The community is the primary driver of value creation for the project, based on shared social consensus to support the name service. Therefore, transitioning to decentralized governance with efficient DAO structures is a goal on our roadmap, and we will gradually work towards achieving it. Our team has prior experience with DAOs and their design in the Ethereum ecosystem, and we aim to promote the value of decentralization in the Aleph Zero ecosystem as well.
7. Could you tell us about the registration process for domains? Is the pricing going to be based on the number of characters and the duration of ownership?
We are currently working on innovative ideas for domain pricing. Upon launch, users will be able to register 5+ character domains at a fixed base fee, which will be in a similar price range to other domain services like ENS (between $5-$10). However, we will later introduce a demand-based pricing mechanism for renewal fees to prevent heavy domain squatting. This will result in an increased renewal fee for domains that attract high demand. To provide domain owners with strong ownership guarantees, it will be up to the user to either register a domain for one year at the base fee and then pay the demand-based pricing fee for the second year, or register the domain directly for up to three years and pay a premium for the additional years on top of the base fee upfront. Rare domains, such as 3- and 4-character domains, will be sold via an auction mechanism that we aim to release together with our own native domain marketplace sometime after the initial launch.
8. Assuming that I minted a .azero domain, apart from the mint, am I contributing any value to the ecosystem?
The more users in the Aleph Zero ecosystem make use of domains, the easier the overall ecosystem experience will be. Additionally, the domains themselves are only the starting point for our broader product strategy. We plan to expand into the field of private identity and explore different use cases enabled by this powerful identity primitive.
We hope that external developers and projects will make use of this technology and integrate it into their products to further advance the ecosystem.
9. What is your first short-term goal (once live) and the long-term vision for AZNS, and what is your strategy to scale it beyond crypto and achieve real-world adoption?
Our short-term goal is to achieve a high level of adoption within the Aleph Zero ecosystem. To this end, we are in contact with all ecosystem projects to promote integrations and partnerships. We believe that this will significantly improve the user experience on Aleph Zero. In the mid-term, we aim to release the very first version of privacy-preserving domains powered by Aleph Zero's Liminal. Additionally, we plan to expand beyond Aleph Zero and push towards the wider Substrate/WASM ecosystem. Regarding real-world adoption, we will tackle this gradually by starting with on-chain use cases such as on-chain identity aggregation. From there, we will push step-by-step into real-world adoption by addressing existing problems and use cases. Privacy will play a crucial role in this, as most real-world use cases involve sensitive information and are not possible without strong privacy guarantees.
10. A variable, demand-based pricing approach has been taken for the adjustment of the renewal fee for a specific domain relative to its value. Can you explain the demand-based pricing algorithm for the uninitiated, using an example, and describe your approach to it?
The basic idea behind demand-based pricing is to move from fixed pricing to a flexible pricing approach that is more efficient at pricing domains closer to their fair value. However, implementing such a system can be challenging, as it requires balancing the need for strong ownership guarantees with the ability to disincentivize domain squatting.
Consider the example of Bob, who owns the domain amazon.azero. He believes this domain will become valuable once Amazon launches on Aleph Zero. When Amazon decides to join Aleph Zero and aims to purchase amazon.azero, Bob declines their offer of $10,000 and asks for $1,000,000 instead. He has no costs other than the initial yearly base fee of $5. By not selling the domain and just paying the base fee, Bob is extracting value from AZERO Domains and the community, as he is paying a price much lower than the fair value of the asset. Furthermore, he is blocking Amazon from using the domain.
To address this issue, we will implement a simple demand-based pricing strategy. For example, the renewal fee could increase by 1% of the highest bid for the specific domain. In the case of amazon.azero, the $10,000 bid would increase the renewal fee from $5 to $105. This would create 21 times the value for AZERO Domains and increase the cost of holding the domain for Bob, disincentivizing squatting. Bear in mind that this is just an example and the actual implementation might look different (a small sketch of this pricing rule appears at the end of this AMA).
11. You mentioned that you reserved some names for ecosystem projects and advocates. While most of these are agreed upon, how do you determine who a name should be reserved for? E.g., Jump Capital is well known in the space; let's say I, as a nominator in the $AZERO ecosystem, want to reserve 'jump'. Would you reserve 'jump.azero' for them or for me? What determines the team's reservation procedure?
We are aware of the complexity surrounding domain reservation.
Our current methodology includes reserving domains for ecosystem projects that are verified with Aleph Zero, as well as for community-facing foundation members and community validators. Our goal is to reduce the risk of impersonation within the Aleph Zero ecosystem. Furthermore, we will try to reserve the name of any existing project that is vetted by the community. However, we will not reserve names for projects that are solely created to reserve a domain that already has high value. Additionally, we will not reserve brands or names for projects that are not yet on Aleph Zero. We believe in an open and free domain market, and our demand-based pricing will enable an efficient secondary domain market in the future, leading to fair domain allocation and value creation for the ecosystem.
12. Can you share with us any information on the steps taken towards security and audits of AZNS?
We have just started to prepare the auditing process with Kudelski Security, one of the most prominent security companies in the world. They recently announced a strategic partnership with Aleph Zero to audit and support ecosystem projects with security advice. Further, we are in close contact with the ink! development team at Parity and have already received code feedback from the security company that is auditing ink! itself.
13. Could you share a few words with the community?
I can already state that our vision extends far beyond being a simple domain service. We believe in the power of on-chain identities and want to leverage these for both crypto-native and real-world use cases. We will soon provide more information on our vision and how we plan to become one of the key identity providers in the Substrate ecosystem.
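To make the demand-based pricing example from question 10 concrete, here is a minimal sketch in Python. The $5 base fee, the 1% rate, and the $10,000 bid come from the example above; the function shape itself is illustrative and, as the team notes, the actual implementation may look different.

def renewal_fee(base_fee: float, highest_bid: float, demand_rate: float = 0.01) -> float:
    """Yearly renewal fee under the simple demand-based pricing example.

    The fee grows with demand: the flat base fee plus a fixed share
    (here 1%) of the highest bid recorded for the domain.
    """
    return base_fee + demand_rate * highest_bid

# The amazon.azero example from the AMA: a $10,000 bid raises the
# renewal fee from the $5 base fee to $105.
print(renewal_fee(base_fee=5.0, highest_bid=10_000.0))  # 105.0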
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816734.69/warc/CC-MAIN-20240413114018-20240413144018-00731.warc.gz
CC-MAIN-2024-18
12,159
35
https://forum-en.msi.com/index.php?topic=130357.0
code
I will check that sticker in the late afternoon when I get home.
Okay, do that. Once you know the version, download the BIOS archive that contains exactly that version from the MSI product site for your board and extract the BIOS file from it (A7513IMS.xyz).
There are two recovery attempts you can try. They only have a chance to work if the BIOS Boot Block is still intact, and currently we do not really know if it is, so you have to test.
For both methods, please use one stick of RAM only, unplug all hard drives and optical drives you don't need, use a PS/2 keyboard, and clear CMOS with your PSU disconnected from A/C power.
Method I (this works perfectly on my P45 Platinum, so it is worth trying on your board as well):
- Rename the BIOS file to AMIBOOT.ROM and burn it to a CD.
- Hook up an optical SATA drive to one of the Intel ICH10R SATA ports of your board, insert the prepared CD, and press CTRL + Home continuously on your PS/2 keyboard right after starting the system (hit it a couple of times to make sure the Boot Block routines process the command at the right time to jump to the recovery routine).
- If you are lucky, a recovery flash will be initiated.
Method II (never tried it before, so please try all variations):
- Keep all hard drives and optical drives disconnected; use a PS/2 keyboard.
- Copy AMIBOOT.ROM to the root folder of a USB flash drive (FAT/FAT32 file system).
- Plug the flash drive into one of the back-panel USB ports.
- Hit CTRL + Home on the PS/2 keyboard after start-up and check if there is access to the USB flash drive (best to use one with an LED indicator).
- If there is access to the USB flash drive, but nothing seems to happen, try the original BIOS file (don't rename it) and see if that changes anything.
- If there is no change, but still access to the USB flash drive, try renaming the file to A7513IMS.ROM and see if that works.
If none of this works for you, there is reason to believe that the Recovery Routine was corrupted during the failed BIOS update. In that case, you will need to contact your reseller.
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00228-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,076
16
https://wiki.azotel.com/2016-2qcrm-auto-close-tickets-waiting-for-customer
code
2016-2Q:CRM: Auto Close tickets waiting for customer
A new field has been added to SIMPLer that allows operators to specify after how many days of inactivity (since the last update) tickets set to "waiting for customer" (see Fig. 1) can be closed. The field is named "Specify number of days after which 'waiting for customer' tickets should auto-close" (see Fig. 2) and is available in the WISP settings ("Settings -> Modify WISP") under the "SIMPLer Settings" sub-section.
Fig. 1: Setting status to "waiting for customer"
Fig. 2: Specify number of days after which "waiting for customer" ticket(s) should auto-close.
The feature works as follows:
- Enter a number of days into the "Specify number of days after which 'waiting for customer' tickets should auto-close" field (see Fig. 2) and press "Update WISP" at the end of the page.
- Every maintenance ticket with the status "waiting for customer" that hasn't been updated for the specified number of days will be closed. The only tickets that will not be closed are tickets of type "azotel" and "azotel-feature".
- An email will be sent to the address specified in the "email" field under the "Contact Details" of "Settings -> Modify WISP" (see Fig. 3). This email will list the IDs of the maintenance tickets that were closed. Note: the first time the script runs, the email might be long if a lot of tickets meet the requirements. See Fig. 4 for an example of the type of email that will be sent.
- The script to auto-close these tickets runs once a day at 5:45am.
Fig. 3: Email field to which the auto-closed maintenance ticket IDs will be sent.
Fig. 4: Example email received after the script closes tickets. Note: the number of days is taken from the "Specify number of days after which 'waiting for customer' tickets should auto-close" field (Fig. 2).
A maintenance ticket closed by the feature will look something like the ticket shown in Fig. 5, where the ticket status is "closed".
Fig. 5: Example of a ticket closed by the new feature.
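A minimal sketch of the nightly job's logic as described above. The ticket representation, field names, and the 14-day setting are illustrative assumptions, not SIMPLer internals.

from datetime import datetime, timedelta

AUTO_CLOSE_DAYS = 14                         # assumed value of the per-WISP setting
EXEMPT_TYPES = {"azotel", "azotel-feature"}  # ticket types the job never closes

def auto_close_waiting_tickets(tickets, now=None):
    """Close 'waiting for customer' tickets idle past the configured limit.

    `tickets` is assumed to be an iterable of dicts with 'id', 'status',
    'type', and 'last_updated' keys. Returns the IDs of closed tickets,
    which the real job emails to the WISP contact address.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=AUTO_CLOSE_DAYS)
    closed_ids = []
    for ticket in tickets:
        if ticket["status"] != "waiting for customer":
            continue
        if ticket["type"] in EXEMPT_TYPES:
            continue
        if ticket["last_updated"] <= cutoff:
            ticket["status"] = "closed"
            closed_ids.append(ticket["id"])
    return closed_ids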
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00172.warc.gz
CC-MAIN-2022-21
1,977
14
https://sites.google.com/site/genderguide/
code
What's the difference between sex and gender? Are they related to sexual orientation? How does gender relate to the queer acronym?
- Interactive virtual course that involves the audience
- 5-15 min
- 38 slides
The Queer Acronym
What does LGBTQQIAA mean?
- Automatically-timed, repeating slideshow
- 1:40 min
- 14 slides
The author(s) have placed all presentations in the public domain and waived all copyright-related rights.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00392-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
424
12
https://blogs.msmvps.com/kenlin/2004/08/
code
This article discusses:
Yes… this security topic is hot. I attended many seminars from Microsoft showing how dangerous it is and how careless we were in the past. However, I found out that most of my MCAD/MCSE students are unaware of this issue. So it is lucky that MSDN Magazine is talking about this in the current issue (September 2004). Hurry, go and read about it, and get ready to change your code and your design from now on.
As Windows XP SP2 is about to be released, it changes a lot in your Windows XP by applying a lot of security measures. These technologies include:
I read in Windows & .NET Magazine that Windows XP SP2 is finally going to be released. It is Build 2180. MSDN Subscribers should be able to download it soon. However, for this week there are only English and German versions. Early next week the Japanese version will be there too, and late next week you may find it on Windows Update (when is the Traditional Chinese version coming?). This SP2 will include the .NET Framework v1.1 Runtime Library, and its size will be 272 MB (good news, as until now the .NET Framework Runtime Library was only an optional download). However, from the MVP private newsgroup, I heard that there will be a Build 2812 or 2813… Well, I don't know where or how they got that news; you may find out once you have installed it.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00207.warc.gz
CC-MAIN-2021-17
1,313
6
http://blog.kdecherf.com/2012/04/12/oracle-i-download-your-jdk-by-eating-magic-cookies/
code
Note: use --no-check-certificate to avoid this issue.
Today, I had to install the Sun/Oracle JDK on some servers, so I visited the JDK download page to retrieve the direct download link for wget on each server:
$ wget http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm
After launching the command, the server redirects me to a 5K HTML file:
Location: http://download.oracle.com/errors/download-fail-1505220.html [following]
WAIT, WHAT? I can't download the file if I don't visit their download page to accept the OTN license. Challenge accepted, I'm going to play with requests…
What happens when I click on the archive link?
Request URL: http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm
Request Method: GET
Cookie: s_nr=some_integer; s_cc=true; gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fjava%2Fjavase%2Fdownloads%2Fjdk-7u3-download-1501626.html; s_sq=%5B%5BB%5D%5D
Okay, trying to play with cookie values (I deleted the useless values):
$ wget --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fjava%2Fjavase%2Fdownloads%2Fjdk-7u3-download-1501626.html;" http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm
--2012-04-12 18:47:58-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm
Resolving download.oracle.com (download.oracle.com)... 126.96.36.199, 188.8.131.52
Connecting to download.oracle.com (download.oracle.com)|184.108.40.206|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm [following]
--2012-04-12 18:47:59-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm
Resolving edelivery.oracle.com (edelivery.oracle.com)... 220.127.116.11
Connecting to edelivery.oracle.com (edelivery.oracle.com)|18.104.22.168|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm?AuthParam=THEIR_AUTHPARAM [following]
--2012-04-12 18:48:00-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm?AuthParam=THEIR_AUTHPARAM
Connecting to download.oracle.com (download.oracle.com)|22.214.171.124|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 67667318 (65M) [application/x-redhat-package-manager]
Saving to: `jdk-7u3-linux-x64.rpm'
You can set whatever you want in gpw_e24; I think it's a kind of referer. As for me, I'm using the download page URL.
According to the OTN BCL document for Java SE: BY SELECTING THE "ACCEPT LICENSE AGREEMENT" (OR THE EQUIVALENT) BUTTON AND/OR BY USING THE SOFTWARE YOU ACKNOWLEDGE THAT YOU HAVE READ THE TERMS AND AGREE TO THEM.
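For reference, the same cookie trick is easy to reproduce outside wget. Here is a minimal sketch using Python's requests library; the URL and cookie value are the ones from the post above and may have changed since.

import requests

URL = "http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-x64.rpm"
# The cookie the Oracle download page sets; the value is just the page URL.
COOKIES = {
    "gpw_e24": "http://www.oracle.com/technetwork/java/javase/downloads/jdk-7u3-download-1501626.html",
}

# requests follows the two redirects (edelivery, then AuthParam) automatically.
response = requests.get(URL, cookies=COOKIES, stream=True)
response.raise_for_status()

with open("jdk-7u3-linux-x64.rpm", "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 20):
        f.write(chunk)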
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00183-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,779
15
https://userapps.support.sap.com/sap/support/knowledge/en/2956000
code
- Error "Adding or removing columns during dataset update is not allowed" while applying a model in Smart Predict of SAP Analytics Cloud
- SAP Analytics Cloud (SAC) - Smart Predict
Reproducing the Issue
- Log on to SAC.
- Create a new Predictive Scenario.
- Choose the Classification model type.
- Create and save the model.
- Click Apply Predictive Model.
- Select a new Input Dataset.
- Set the other necessary elements.
- Click OK and run the apply.
- Click Predictive Model (1) at the bottom of the page.
- Wait a few minutes and observe that the Status changes to "Apply failed".
- Check the detailed information: "Adding or removing columns during dataset update is not allowed".
Cause
- The apply dataset (Input Dataset) must have the same structure as the training dataset on which the model was created: the same number of columns, the same field names, and so on.
Resolution
- To troubleshoot this kind of issue, first run the apply using the training dataset as the input dataset. If this works, check the structure of the apply dataset and make sure it matches the training dataset (see the sketch at the end of this article).
See Also
- Hands-On Tutorial SAP Smart Predict, Customer Churn Analysis for online retail
- 2661746 - SAP Analytics Cloud Smart Predict Release to Customers Phasing Clarification
- 2569847 - Where can you find user assistance (help) for SAP Analytics Cloud to use, configure and operate it more effectively?
- Have a question? Ask it here on the SAP Community. Or reply and share your knowledge!
- 2487011 - What information do I need to provide when opening an incident for SAP Analytics Cloud?
- SAP Analytics Cloud > Learning > Guided Playlists
- SAP Analytics Cloud > Learning > Guided Playlists > Getting Support
Keywords: PA, InfiniteInsight, test sub-set, training a model, classification, time series, regression, failed, apply failed, KBA, LOD-ANA-PR, SAC Predictive, BI-RA-PA, Predictive Analytics, How To
SAP Analytics Cloud 1.0
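As referenced in the Resolution above, a minimal sketch of the structural check, assuming both datasets are available as CSV exports; pandas and the file names are assumptions, since SAC performs this check internally.

import pandas as pd

def structure_mismatch(training_csv: str, apply_csv: str) -> list[str]:
    """Report column differences that would make the apply step fail."""
    train_cols = list(pd.read_csv(training_csv, nrows=0).columns)
    apply_cols = list(pd.read_csv(apply_csv, nrows=0).columns)
    problems = []
    for col in train_cols:
        if col not in apply_cols:
            problems.append(f"missing column: {col}")
    for col in apply_cols:
        if col not in train_cols:
            problems.append(f"extra column: {col}")
    return problems

# Any reported difference must be fixed in the input dataset before applying.
print(structure_mismatch("training.csv", "apply.csv"))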
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00090.warc.gz
CC-MAIN-2022-33
1,888
27