url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://ez.analog.com/thread/99980-daq2-fmc-hpc-to-lpc
code
I am currently using the fmc-daq2 on a Zynq-7000 board through the FMC HPC connector, and I want to add another fmc-daq2 board to my design. The issue is: can I use the fmc-daq2 board by connecting it to the FMC LPC connector? Is it possible by changing some constraint file, or by some other means?
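If the signals the DAQ2 needs are actually routed on the LPC connector, the usual first step is to re-map the top-level port locations in the XDC constraint file. A minimal sketch, with hypothetical pin sites (the real sites depend on the carrier board's schematic, so every PACKAGE_PIN below is a placeholder):

```tcl
# Hypothetical example: move DAQ2 ports from FMC HPC pins to FMC LPC pins.
# All pin sites below are placeholders; look them up in the carrier schematic.

# Reference clock (LPC GBTCLK0)
set_property PACKAGE_PIN U5 [get_ports rx_ref_clk_p]
set_property PACKAGE_PIN V5 [get_ports rx_ref_clk_n]

# JESD204B lane on the LPC transceiver pair (DP0)
set_property PACKAGE_PIN W4 [get_ports {rx_data_p[0]}]
set_property PACKAGE_PIN Y4 [get_ports {rx_data_n[0]}]

# SPI / control signals on LA pins
set_property PACKAGE_PIN AB11 [get_ports spi_csn_clk]
set_property IOSTANDARD LVCMOS25 [get_ports spi_csn_clk]
```

One caveat worth checking before trying this: a standard FMC LPC connector routes only a single gigabit transceiver pair (DP0), while the DAQ2 design uses multiple JESD204B lanes per direction, so a constraints-only change is unlikely to be sufficient on its own.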
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217354.65/warc/CC-MAIN-20180820215248-20180820235248-00577.warc.gz
CC-MAIN-2018-34
260
1
https://sourceforge.net/p/wxhaskell/mailman/message/20651584/
code
Tue Oct 21 06:44:49 EDT 2008 jeremy.odonoghue@... * Small update to XRC support M ./wxc/src/eljrc.cpp -3 View patch online:
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886815.20/warc/CC-MAIN-20180117043259-20180117063259-00255.warc.gz
CC-MAIN-2018-05
424
12
https://www.resurchify.com/ed/isca-spsc-symposium-2022-2nd-symposium-on/13122
code
ISCA SPSC Symposium 2022: 2nd Symposium on Security and Privacy in Speech Communication, joint with the 2nd VoicePrivacy Challenge Workshop. Incheon National University, Korea.
Event Date: September 23 - September 24, 2022
Submission Deadline: June 15, 2022
Notification of Acceptance: July 1, 2022
Camera Ready Version Due: September 5, 2022
Call for Papers
The second edition of the Symposium on Security & Privacy in Speech Communication (SPSC), this year combined with the 2nd VoicePrivacy Challenge workshop, focuses on speech and voice, the media through which we express ourselves. Because speech communication can be used to command virtual assistants, to convey emotion, or to identify oneself, the symposium seeks answers to the question of how we can strengthen security and privacy for speech representations in user-centric human/machine interaction. Interdisciplinary exchange is therefore in high demand, and the symposium aims to bring together researchers and practitioners across multiple disciplines, including signal processing, cryptography, security, human-computer interaction, law, and anthropology. The SPSC Symposium addresses interdisciplinary topics.
For more details, see https://symposium2022.spsc-sig.org/home/_cfp/CFP_SPSC-Symposium-2022.pdf
VoicePrivacy Challenge: https://www.voiceprivacychallenge.org/
Important dates:
- June 15 (**extended deadline**) - long paper submission deadline
- June 15 - VoicePrivacy Challenge paper submission deadline
- June 15 - short paper submission deadline
- July 1 - author notification
- July 31 - VoicePrivacy Challenge results and system description submission deadline
- September 5 - final paper submission
- September 23-24 - SPSC Symposium at Incheon National University, Korea
Topics regarding the technical perspective include:
- Privacy-preserving speech communication: speech recognition and spoken language processing; speech perception, production and acquisition; speech synthesis and spoken language generation; speech coding and enhancement; speaker and language identification; phonetics, phonology and prosody; paralinguistics in speech and language; privacy engineering and secure computation; network security and adversarial robustness
- Natural language processing: web as corpus, resources and evaluation; tagging, summarization, syntax and parsing; question answering, discourse and pragmatics; machine translation and document analysis; linguistic theories and psycholinguistics; inference of semantics and information extraction
Topics regarding the humanities' view include:
- Human-computer interfaces (speech as medium): usable security and privacy; pervasive computing and communication
- Ethics and law: privacy and data protection; media and communication; electronic mobile commerce; data in digital media; acceptance and trust studies; user experience; research on practice; co-development across disciplines
We welcome contributions on related topics, as well as progress reports, project disseminations, theoretical discussions, and "work in progress". There is also a dedicated PhD track.
In addition, participants from academia, industry, and public institutions, as well as interested students, are welcome to attend the conference without making a contribution of their own. All accepted submissions will appear in the conference proceedings published in the ISCA Archive. The workshop will take place mainly in person at Incheon National University (Korea), with additional support for participants who wish to join virtually. Papers intended for the SPSC Symposium should be up to eight pages of text; the length should be chosen appropriately to present the topic to an interdisciplinary community. Paper submissions must conform to the format defined in the paper preparation guidelines, as detailed in the author's kit, and must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English. At least three single-blind reviews will be provided, and we aim to obtain feedback from interdisciplinary experts for each submission. The review criteria applied to regular papers will be adapted for VoicePrivacy Challenge papers to be more in keeping with system descriptions and results. ISCA SPSC Symposium 2022 will take place at Incheon National University, Korea. It is a 2-day event starting on Sep 23, 2022 (Friday) and wrapping up on Sep 24, 2022 (Saturday). Submissions for this Symposium can be made by Jun 15, 2022, and authors can expect the result of submission by Jul 1, 2022. Upon acceptance, authors should submit the final version of the manuscript on or before Sep 5, 2022 via the official website of the Symposium. Please check the official event website for possible changes before you make any travel arrangements. Events are generally strict with their deadlines.
It is advisable to check the official website for all the deadlines.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710909.66/warc/CC-MAIN-20221202150823-20221202180823-00867.warc.gz
CC-MAIN-2022-49
5,315
56
https://support.hifiberry.com/hc/en-us/community/posts/207296225-Using-ameter-VU-meter-to-show-output-levels
code
I have been using this VU meter program, http://laugeo.free.fr/ameter.html, to show the output from media players: with VLC ("vlc --alsa-audio-device=ameter sound.wav"), with aplay ("aplay -D ameter sound.wav"), or with mplayer ("mplayer -ao alsa:device=ameter sound.wav"). ameter works fine on an external USB sound card, but when I try to use it on the HiFiBerry DAC+ Pro it simply doesn't want to work. Here is a pre-compiled version of ameter, compiled on Raspbian Wheezy. Create the /home/pi/.asoundrc file as follows: slave.pcm 'hw:0,0' # can be hw or hw:0,1 etc... This works fine on an external USB sound card, just not the HiFiBerry DAC+ Pro. It could be that there is a conflict between /etc/asound.conf and /home/pi/.asoundrc. Does anyone know how to write/modify the .asoundrc file so that ameter works on the HiFiBerry DAC+ Pro? PS, on another subject: if you are having trouble playing MP3s with VLC using the HiFiBerry DAC+ Pro, run VLC with "vlc --aout alsa --alsa-audio-device=hw:0,0" and it will play MP3s. Or make a wrapper script: exec /usr/bin/vlc --aout alsa --alsa-audio-device=hw:0,0 "$@"
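For reference, a complete ~/.asoundrc along the lines the post describes would look roughly like this. This is a sketch based on the ameter scope-plugin setup, not a verified config: the library path and the assumption that hw:0,0 is the DAC+ Pro should both be checked (e.g. with `aplay -l`):

```
# Hypothetical ~/.asoundrc routing playback through the ameter scope plugin.
# "hw:0,0" is assumed to be the HiFiBerry DAC+ Pro; verify with `aplay -l`.
pcm_scope.ameter {
    type ameter
}

pcm_scope_type.ameter {
    lib /usr/local/lib/libameter.so   # adjust to where ameter was installed
}

pcm.ameter {
    type meter
    slave.pcm 'hw:0,0'   # can be hw or hw:0,1 etc...
    scopes.0 ameter
}
```

If /etc/asound.conf also defines a default pcm for the DAC+ Pro, the two files can indeed conflict; definitions in ~/.asoundrc take precedence for that user.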
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00591.warc.gz
CC-MAIN-2021-04
1,124
15
https://jordy.app/
code
July 18, 2020 - 3 min read When I come across blog ideas, I add them here as a reminder to write about them. March 2, 2020 - 11 min read The first time I added Amplify to a react-native app I had some struggles to overcome. With this post I hope I can get both you and future me up and running in no-time!
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360107.7/warc/CC-MAIN-20210228024418-20210228054418-00458.warc.gz
CC-MAIN-2021-10
305
4
https://manhwa18.net/manga-silent-war.html
code
The Main character of this pornhwa is Hyun. Yeah, while reading this webtoon you need to be reminded of that. At season 1, he starts off as a typical nerdy guy. Only difference is that he doesn’t wear glasses, but he is a typical beta male and shit. He is a genius because he can predict the outcomes of gambling games with the magical power of Statistics. (yeah, this series basically teaches you that studying math pays off, who knew!) Anyway, the main antagonist of this series is the buff guy who is a rip off of Sakuragi from Slam dunk, Gunner. He is the alpha male who gets all the bitches. Because he is the alpha male, he also has a group of young thugs following him around. Oh yeah, all these people just graduated from high school but look like Jojo characters. And they get more pussy than you. All right, time for the bitches… First is Mia, the female main character who everyone in this series hates. Not only is Gunner banging her, she is also his girlfriend. For some retarded reason, she voluntarily gave herself to Gunner to protect herself from mean girl bullies. You know, instead of reporting to the principal? Shame them on social media? Dammit, and Mia is supposed to be the smartest girl at school? Whatever. Next is Sophie, yellow blonde mistress of Gunner and everyone’s favourite character. She is the hottest in this series but has the personality of a diva. They say she is tsundere but nah… she wants gunner dick. Next is Anna, Sophie’s BFF, who has the largest boobs in the series. She looks like shit in season 1, but because the author needed more girls, they redrew her well in season 2. Next is Lina, Gunner’s prostitute with a mysterious past. Melee weapon specialist. Finally, there is Miss Song, Hyun’s teacher, who is a masochist and enjoys getting dominated by her students. Damn it, where were these teachers when I was at school? This series is titled Silent War because Hyun is silently fighting with Gunner. 
He knows he can’t win directly, but with the power of plot and main character development, he is sure to defeat Gunner and claim his Kingdom. You know, then it becomes… My Kingdom? Yeah, basically it. Have fun reading!
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00031.warc.gz
CC-MAIN-2022-40
2,162
1
https://community.macmillan.com/videos/1850-tactics-to-build-engaged-learners
code
In this talk, Dr. Michelle Zimmerman and student, Jennifer Fernandez, discuss how to engage learners. Jennifer brings the student perspective as a current 10th grader at Renton Prep in Seattle, Washington. A competitive ice skater with a passion for music and science education, Jennifer brings a fresh perspective to the conversation. Over the course of her impressive career, Dr. Zimmerman has taught grades from pre-K to 10th. In addition to being an assistant principal and lead teacher, she is also a Microsoft Innovative Educator Expert. In this role, Zimmerman advises Microsoft and educational institutions on integrating technology in pedagogically sound ways, and helps them build educator capacity for using tech to improve learning.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00206.warc.gz
CC-MAIN-2020-05
744
1
http://spider-care.availablehere.co.uk/videos/show/GCZ6TgbgTx8
code
5 Ways To Kill Granny's Pet Spider. Published on 31 Aug 2018. you want to know how to kill granny's pet spider?? watch full video and enjoy! Help me here :-) :- paypal.Me/imtiyaz7 hope you enjoy watching my content !! like share and subscribe for more such videos !! FOOGANG LIVE LONG !!
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00663.warc.gz
CC-MAIN-2020-40
300
8
http://blogs.technet.com/b/security/default.aspx?PageIndex=7
code
Get on-the-go access to the latest insights featured on our Trustworthy Computing blogs. This article in our series on Microsoft’s free security tools is focused on a tool called the Microsoft Baseline Security Analyzer (MBSA). Many years ago before Windows Update was available, servicing software was much more painful than it is today. Microsoft released security updates weekly, and there were few deployment technologies available to help determine which systems needed which updates. I wrote an article on this topic if you are interested in a walk down memory lane. For those IT administrators that lived through those days, the MBSA was a godsend. Today, 10 years later, the MBSA is still a free security tool that many, many IT Professionals use to help manage the security of their environments. I have written about the threat landscape in the European Union (EU) before, focused on the first and second half of 2011. If you are interested in learning about threats Microsoft observed during 2011, please read these articles: This article is focused on threats observed in the first half of 2012. The most recent volume of the Microsoft Security Intelligence Report, volume 13, includes data on the first half of 2012, including deep dive regional threat assessments on every member state in the EU as well as 78 other locations around the world. In this article I provide a summary of the latest threat data for the EU. This week the Microsoft Malware Protection Center (MMPC) published a new threat report focused on Rootkits. A rootkit is a suite of tools used by attackers to provide stealth capabilities to malware. The typical goal of a rootkit is to enable malware to remain undetected on a system for as long as possible, in order to facilitate the theft of sensitive data, change computer settings, or compromise system resources. 
This morning, Adrienne Hall, General Manager for Trustworthy Computing delivered a keynote speech at RSA Europe and announced the availability of the Microsoft Security Intelligence Report volume 13 (SIRv13). It’s hard to believe that it’s been over six years since we published the first volume of the report. The report has evolved a lot since then, but our goal has always remained the same: to provide our customers with the most comprehensive view into the threat landscape so they can make informed risk management decisions. In July, we kicked off a blog series focused on "Microsoft's Free Security Tools." The series highlights free security tools that Microsoft provides to help make IT professionals' and developers' lives easier. A good tool can save a lot of work and time for those people responsible for developing and managing software. In the series we discuss many of the benefits each tool can provide and include step by step guidance on how to use each. Below is a summary of the tools covered in the series and a brief overview of each. In the first two parts of this series on the threat landscape in the Middle East (Part 1, Part 2) I focused on the threats in Qatar, Iraq and the Palestinian Authority (West Bank and Gaza Strip). In this final part of the series I focus on Israel and Saudi Arabia. The data in this article comes from the Microsoft Security Intelligence Report volume 12 (SIRv12) and previous volumes of the report. This article in our series focused on Microsoft’s free security tools is on a tool called Portqry. This tool is a TCP/IP connectivity test tool, port scanner, and local port monitor. Portqry is useful for troubleshooting networking issues as well as verifying network security related configurations. Because of this broad functionality, I have heard some Information Technology (IT) Professionals refer to this tool as a “Swiss army knife” of tools. 
In the first part of this series on the threat landscape in the Middle East I focused on the threats in Qatar, the location with the largest improvement in malware infection rates in the region. In this part of the series I focus on the Palestinian Authority and Iraq, the two locations with the highest malware infection rates in the region in the second half of 2011. Recently we have published articles on the threat landscape in many different parts of the world including the European Union (part 1, 2, 3), Africa, Asia (part 1, 2, 3) and Oceania. The analysis in these articles is based on data and insights from the Microsoft Security Intelligence Report volume 12 (SIRv12) and previous volumes of the report. I attended the second annual (ISC)² Security Congress, collocated with the ASIS International 58th Annual Seminar and Exhibits last week (September 10-13, 2012) held in Philadelphia and wanted to pass on some of what I saw there. Microsoft Trustworthy Computing was a sponsor of (ISC)² Security Congress and Microsoft Global Security had an exhibitor booth (seen in the picture above) on the ASIS show floor.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705300740/warc/CC-MAIN-20130516115500-00024-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
4,899
13
https://forum.fast-report.com/en/discussion/17649/subreport-alignment-issue-2-data-sets
code
SubReport Alignment Issue with 2 Data Sets. I have tried to use sub-reports for the first time. The data is coming from two data queries; the queries are essentially the same, just selecting different information based on a value. I wanted to display this information side by side using sub-reports. What I have works, but I cannot get it to line up. This is how the report is appearing, and this is how I want the report to appear: you can see each clerk name having its own row regardless of the number of rows each one is assigned. The header I added on each sub-report ("Clerk X NAME") was just so I could verify it was aligning. This is my current layout: the sub-reports are just a MasterData band with some additional Memo fields. Is what I am trying to achieve possible within FastReport?
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00739.warc.gz
CC-MAIN-2023-06
788
10
https://www.antstack.io/
code
We are a full-stack serverless company aiming to provide holistic solutions to get you up and running with serverless! Exploring Serverless? AntStack can help you design your business applications to embrace the serverless stack and make architectural decisions with cost in mind. We can also help you implement the best practices of serverless computing. Design, Develop & Deploy: with a team of strong UI and backend developers, we build applications on serverless platforms. We work closely with clients to understand their technology and design needs, and we take care of end-to-end application development, from UX to deployment, and even manage it afterwards. Elevate and Transform: we transform existing on-premise business applications to the serverless platform. Serverless is the gateway to the hybrid cloud; the serverless platform eliminates the legacy three tiers (compute, storage and network) and lets you focus only on the business application. UX and UI as a Service: user experience and user interface are very important aspects of any product. Our team, with a strong hold on design, can help clients from ideation to product design and mockups, and can also develop the interfaces to achieve their end goals. An app to teach school/college kids entrepreneurship by building business models. Croo is an online car service platform that offers a convenient and transparent option to fix dents, windshields and car detailing; it's affordable, transparent and quick. Built on serverless with a Firebase backend, we deployed the React frontend on Firebase Hosting, using Firebase Functions and Cloud Firestore. A portal and progressive web app that helps consumers buy their desired car: built with the Serverless Framework, with real-time data via AWS AppSync and a frontend in React with Apollo GraphQL; the dealer app was built with React Native. An open-source agenda app that shows real-time activities held at conferences: built with the GatsbyJS static site generator, the app keeps functioning during network failures, and an AWS Amplify backend enables feedback collection at the talk level.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396163.18/warc/CC-MAIN-20200527204212-20200527234212-00107.warc.gz
CC-MAIN-2020-24
2,131
12
https://answers.sap.com/questions/8781703/infoobject-missing-in-query-designer.html
code
I created one DSO which contains InfoObjects InfoA and InfoB and other fields, and I also created a query against this DSO. But in Query Designer, InfoObjects InfoA and InfoB do not appear; all the other fields can be found in Query Designer. I am confused about this. Can anybody give some advice? Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00102.warc.gz
CC-MAIN-2021-31
325
3
https://cohost.org/blep/post/75990-maybe-i-will-simply
code
maybe i will simply use this account to subject you to rabbit holes that are relevant to exactly zero people. i warned u about following me & now you will suffer the consequences!!! The Toon Boom Harmony TVG file format as you know, people often use wicked sorcery to make objects move as though they are alive, in a process called “anmiaiton.” however, sometimes they will also use “computer technology” for this purpose, such as “Toon Boom Harmony,” a relatively popular 2D animation software. I find Toon Boom Harmony is notable for its very nice vector drawing tools, and especially its vector pencil tool. The Harmony pencil tool creates strokes in a format not possible in other common vector graphics formats such as SVG: a Bézier spline with variable width. The thickness data is another Bézier spline, making this a Bézier-Bézier offset curve. This way, you can just adjust your lines freely without worrying about messing up your line thickness, and vice versa. I think the people at Toon Boom are aware that this is pretty neat, because they make you pay extra for this feature. Now, sometimes, you might wanna take your Toon Boom Harmony project and export it just a little bit. Just get that data out so you can use it somewhere else. You can render it to a raster image, but that’s no fun. You lose all the benefits granted to you by the vector format. Toon Boom Harmony also lets you export to PDF, but PDF, frankly, sucks, and it also does not preserve the strokes created by the pencil tool, because such data cannot exist in PDF (I think they get converted to outlines). So maybe what you really want is to be able to read the files the drawings are stored in themselves. And by you I mean me because most people probably really don’t care. While the Toon Boom project files are in semi-human-readable-ish XML, the drawing files are not. They’re in a proprietary binary format called “TVG” which I assume stands for “Toon Boom Vector Graphics” or something. 
I had a look around, and this format is not documented. It seemed nobody had tried to reverse engineer it either. So I decided I might as well have a go!! The Toon Boom Harmony license agreement forbids that you “6.1.5. modify, reverse engineer, decompile, disassemble, or create derivative works from the SOFTWARE or its proprietary source code,” so I decided not to do that because i would probably go to jail forever. Also, frankly, reversing stripped & optimized code sucks. Probably. I imagine. not that., i would have ever, done anything ofthe sort , So, I’m not a lawyer, but I imagine it would be legal to treat the software as a black box and simply examine the file format itself using files I created, since those files are not part of the software. Let’s open it up!! that sure is lots of binary data! There are several interesting things of note here already: - The file magic is probably - For some reason, TVG files contain a “certificate.” I, writing this right now, have future knowledge: this certificate is tied to your software license and is the same in every file you create. It’s identifying information, so I blurred it out. I don’t know… why… you would do this? Putting, like, a certificate of authenticity™ in every single file a user creates? It’s certainly a very strange choice. TVG seems to use a common pattern found in binary file formats: 4-byte tags, followed by a length, and then followed by the data. I think it’s so nice of them to use string tags like this instead of, like, numeric enums. I don’t know what ENDT are supposed to mean, tOAA (off-screen) are very likely the data for the four layers in a toon boom drawing (underlay, color, line, and overlay). Every tag is also accompanied by another 4-byte tag that either says Given that ZLib is a compression library, this is probably the data encoding. 
Since we can read the UNCO data just fine right here, this tag probably means “unencoded.” There’s also another piece of identifying information here in the TVCI tag (toon boom vector… creator… information…? maybe?): the hostname of my computer and the software name. The hostname is again a very weird thing to include. Beware of sharing toon boom projects, I suppose… Scrolling past the huge zlib blob, you can find a few more things at the end: There’s another zlib blob in the which is probably the color palette. This is followed by which seems to contain the byte offset of every listed tag, (toon boom… table… of contents…?) and finally some kind of cryptographic signature. It seems all the interesting stuff is in the ZLib-compressed data… guess it’s time to unzlib some stuff! I started with the palette data since that seemed easier. Basic Palette Data: Reversing Is So Easy I would like to note it took me an unreasonably long time to decode the ZLib data, because I thought I could just shove those bytes into ZLib and it would work. It did not work. It took me several hours to find out (including reading the ZLib specification… because I wasn’t sure this was ZLib data at all!) that the first 4 bytes of the data are not, in fact, part of the ZLib data. They are just another length value. Specifically, the length of the uncompressed data. Only the bytes after that can be shoved into ZLib and decompressed properly. So that was a thing. Anyway, un-zlibbing the palette data already reveals several immediately readable things: You can see some text like “Black” (the name of the color) and “2022-05-18” (the name of the project I stole this TVG from). Since every second byte in the text is zero, I am going to assume that this is UTF-16 LE. 
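The length-prefix-then-zlib layout described above can be sketched in a few lines of Python. This is a sketch of my reading of the format, not official Toon Boom tooling, and the little-endian byte order is an assumption that happened to match my files:

```python
import struct
import zlib

def decompress_tvg_blob(blob: bytes) -> bytes:
    """Decompress a TVG zlib blob: 4-byte uncompressed length, then zlib data."""
    # The first 4 bytes are NOT part of the zlib stream -- they are the
    # length of the *uncompressed* data (assumed little-endian u32).
    (expected_len,) = struct.unpack("<I", blob[:4])
    data = zlib.decompress(blob[4:])
    if len(data) != expected_len:
        raise ValueError(f"expected {expected_len} bytes, got {len(data)}")
    return data
```

Forgetting to skip those 4 bytes is exactly the mistake that cost me several hours: zlib refuses the stream because the header bytes aren't a valid zlib header.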
Again there are 4-byte tags followed by length and data, so after annotating a little bit… you can see that the color is actually just made from two tags: 00 00 00 FF, which is just the color’s actual value in RGBA format (in this case, black). TCID seems to contain identifying information like the name of the color. Toon Boom projects actually very conveniently have the palette data also available in text form inside The one for this project reads:
ToonBoomAnimationInc PaletteFile 2
Solid Black 0x0a46da1a56b5abe6 0 0 0 255
Solid White 0x0a46da1a56b5abe9 255 255 255 255
Solid Red 0x0a46da1a56b5abec 255 0 0 255
Solid Green 0x0a46da1a56b5abef 0 255 0 255
Solid Blue 0x0a46da1a56b5abf2 0 0 255 255
Solid "Vectorized Line" 0x0000000000000003 0 0 0 255
That very suspicious hex number there can also be found in the data (marked green). This is probably some sort of internal color ID. The only thing left to figure out is the 10 bytes at the beginning. If I open the TPAL data for a drawing that uses two colors, the first byte at the beginning changes to a 2, so that’s probably the number of colors. And finally, for the 79 00 00 00 00 00… well, I have no idea. This seems to just be some sort of header before every color entry. It doesn’t look important, though. Well, that was easy! Surely the layer data in tLAA will be no different! Continued In: What The Fuck Is This Number Format
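Putting the tag-length-data pattern together, a generic walker for these records might look like this. Again a sketch: the ASCII tags and the little-endian u32 length are my assumptions from staring at hex dumps, not a documented format:

```python
import struct
from typing import Iterator, Tuple

def iter_tvg_records(buf: bytes) -> Iterator[Tuple[str, bytes]]:
    """Yield (tag, payload) pairs from a buffer of 4-byte-tag + length + data records."""
    pos = 0
    while pos + 8 <= len(buf):
        tag = buf[pos:pos + 4].decode("ascii", errors="replace")
        # Length assumed to be a little-endian u32 following the tag.
        (length,) = struct.unpack("<I", buf[pos + 4:pos + 8])
        yield tag, buf[pos + 8:pos + 8 + length]
        pos += 8 + length
```

The nice thing about string tags is that a walker like this can dump a readable outline of an unknown file even before you know what any of the tags mean.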
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00371.warc.gz
CC-MAIN-2022-33
7,081
68
http://dzone.com/links/tag/.net.html
code
This week, DZone released its latest Refcard: 13 Things Every C# Developer Should Know. If... more » Every once in a while I’m working on a feature, only to discover that I need to extend... more » In this blog post you can read how to serialize data directly from DataReader into JSON... more » In this C# tutorial I will demonstrate how to implement automatic movement to improve your... more » A few decades ago, when the king programming language was C, pointers were the saviors of the... more » I have never been comfortable with most introductory OOP texts. They use a simple Animal... more » Jeff Fritz shows how to connect to a mobile back-end as a service using Visual Studio and C#. Pretty basic, right? Just do some work forever. If there is a failure (i.e. an unhandled... more » In the spirit of “here’s something I couldn’t find an easy answer for so I’m writing it... more » Blog post describing interesting lessons learned by running software startup project. An example, demonstrating an integration between the ShieldUI Chart and Angularjs Since .NET version 4.5, the C# language has two new keywords: async and await. The purpose of... more » In our Be Sure with Azure .NET – Azure Table Storage (Part 1) we cover details about Azure... more » A Bootstrap 3 template, demonstrating social share plugin with a sample image to be shared. This pattern is usually applied when property SomeType is rarely used. It doesn’t make much... more » Aspose has released v1.1 of Aspose Java for Spring (extension of Spring’s PetClinic Sample... more » If you tried to work with geofences, you may have encountered this problem. You want to have... more » Last week we released version 0.7.2 of the Kentor.AuthServices SAML2 Service Provider for... more » When working with ASP.NET Web Api from a .NET client, one of the more confounding things can... more » ShieldUI recently added wrappers for ASP.NET MVC for its jQuery Suite. This example... 
more » Oracle Performance Dashboard (OPD) is a small ASP.NET website that shows you performance... more » Now that we can declare dynamic objects in C#, how should we define our APIs? Typed, dynamic,... more » The long awaited version of Aspose.Cells for Java 8.2.1 has been released. Aspose.Cells... more » On several occasions I have worked with systems that processed lots of work items with a... more » I did some pair-programming with a friend and we looked into what we thought was a simple... more »
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507448218.19/warc/CC-MAIN-20141017005728-00347-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
2,580
27
https://www.dienarmobil.com/do-i-really-have-to-pay-someone-to-do-my-homework
code
Do I Really Have to Pay Someone to Do My Homework? Do I really have to pay someone to do my assignment for me? Paying someone to do your assignment can sometimes be tricky, as you only have limited time to read it. If your assignment is even the slightest bit suspicious, you will likely lose precious time fixing it yourself. It doesn't have to be about you; it might be that you're having difficulty writing the work yourself. You may have been told that it's a difficult subject matter and that a huge quantity of research goes into each article. In fact, every writer knows that writing can take a long time. So if you are looking to save some time, you may want to look at finding a way to pay someone to do your assignment for you. The simplest way to find someone to do your homework for you is to ask your colleagues or friends for a favor. If your friend's boss asks you to come in one day to help him with a huge project, there is a good chance he wants you to take care of his assignment. If that is the case, you're in luck, and you can take on the task of finding an expert to do your assignment for you. You will need to find some references, and then you can ask around to find out what sort of offers you'll get. Another option for paying someone to do your assignment is to approach the people who run online companies. You can use an online company through a referral, so you will know that the individual will be able to complete your homework correctly. You can also check through their web sites and see how they manage their assignments. If you do not feel comfortable with their service, it's advisable not to work together. An online company can also be useful to you even if you're not happy with their service. 
If you think your online company isn't giving you the service that you deserve, then you can just approach them and say that you are not pleased with their work and that you're willing to switch to a different company. If they're still open to this request, you just have to transfer your account to the other company, and you'll only need to pay them once if you are delighted with their work. If you're still not sure whether it is worth the effort to pay someone to do your assignment for you, then perhaps it isn't worthwhile for you. I would recommend that you get someone to do your assignment for you anyhow, as doing the assignment yourself is really not worth the trouble. It would be cheaper for you to pay someone else to do your assignment, and you wouldn't need to spend time doing it yourself. A good way to find someone who can do your homework for you is to visit your local library and check out the books they are using. You can often find someone there who will do all your homework for you. Also, ask your teacher and see if you're able to get advice from them as well. Your school or college can often offer you a tutor that will help you with completing your homework. So if you want to pay someone to do your homework, you should always check with your school or university first.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510130.53/warc/CC-MAIN-20230926011608-20230926041608-00525.warc.gz
CC-MAIN-2023-40
3,187
10
http://www.usingenglish.com/forum/ask-teacher/3085-cusp-something.html
code
on the cusp of something? Chandler: Is this really your long term plan, for me to run interference? Because I could get a job any day now. Ross: You do appear right on the cusp of something. Come on man, I'm sure he'll lose interest in a week or two, but for now can you please just do this for me? May I know what Ross means in this context? Re: on the cusp of something? Chandler is unemployed. He feels that he'll find a job soon, so he says, "I could get a job any day now," to which Ross replies, "You do appear (to be) right on the cusp (on the verge) of (finding) something (a job)." You do appear right on the cusp of something. To be on the cusp is to be on the edge of, at the point of, or just about to act on or act out something.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163043499/warc/CC-MAIN-20131204131723-00009-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
779
9
http://booksplat.blogspot.com/2006/08/once-and-future-king.html
code
stoopit. And I was on an airplane with no dictionary to look up these words - making me feel trapped as well. It was not pretty. But as soon as Merlin came on the scene, I fell in love with him and the book sailed by. Until Wart pulled the sword from the stone. (I hope I am not giving anything away there.) At that point I realized - this is a heckuva long book and I still have to read a ton more. And so I took a poll. I asked every future senior I came in contact with through church or the Y or randomly running into them at the beach - "What did you think of The Once and Future King?" And after much piercing Veronica Mars-like questioning they all admitted that they liked it fine but they didn't finish it. "AHA!" I yelled (quietly to myself) "This means that I don't have to finish it either!" So I didn't. But I read enough and I promise that before next summer I will finish it. Probably. Geeze, it's long. There is something about King Arthur and LONG that must be set in stone. (Like the sword - ha!) Because every Arthurian saga is as long as an Indiana freight train. Seriously - Camelot, the musical, is nearly 3 hours long. The Mists of Avalon (A feminist version of the Camelot legend - it is great!) is a whopping 912 pages. Monty Python and the Holy Grail was short but left out, well pretty much everything but the llamas. And even Disney's The Sword in the Stone only used the first section of the book, with a bunch of other made up things. So there.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00012.warc.gz
CC-MAIN-2018-17
1,473
3
https://itecnote.com/tecnote/sql-updating-database-records-in-a-loop/
code
declare
begin
  for i in (select * from emp) loop
    if i.sal = 1300 then
      update emp set sal = 13000;
    end if;
  end loop;
end;

This code updates all the records to salary 13000. Instead I want to update only the records having salary 1300 to the value 13000. Can you tell where I made a mistake? I am accessing records using an implicit cursor: for every record I am checking the sal value of that record; if the salary value in a particular record is 1300 I want to update it to 13000.
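A corrected sketch (not from the original post; the WHERE clause and the single-statement variant are editorial suggestions, and EMPNO is the key column of the standard Oracle EMP demo table): the UPDATE inside the loop has no WHERE clause, so every time it fires it touches every row. Restrict it to the current row, or drop the cursor entirely:

```sql
BEGIN
  FOR i IN (SELECT * FROM emp) LOOP
    IF i.sal = 1300 THEN
      -- restrict the update to the row the cursor is currently on
      UPDATE emp SET sal = 13000 WHERE empno = i.empno;
    END IF;
  END LOOP;
END;
/

-- Equivalent, without any PL/SQL loop:
-- UPDATE emp SET sal = 13000 WHERE sal = 1300;
```

The single UPDATE is the idiomatic form here; the row-by-row loop only pays off when per-row logic cannot be expressed in SQL.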
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00596.warc.gz
CC-MAIN-2023-50
467
7
http://www.soshified.com/forums/classifieds
code
Views - 29 Time Left - 27 Days, 1 Hour in Miscellaneous by thelimlight
Views - 67 Time Left - 21 Days, 7 Hours 12.00 [Clearance] Snsd in Korean Discography by huixian01 Selling both korean and japanese albums, photob...
Views - 34 Time Left - 20 Days, 16 Hours in Fashion by SunkyusMoon This is a replica sweatshirt from the 'Girls Ge...
Views - 32 Time Left - 20 Days, 2 Hours in Japanese Discography by iLove Yuri I bought this L holder from my recent trip to S...
Views - 89 Time Left - 14 Days, 2 Hours in Japanese Discography by SunkyusMoon Due to my finances I can no longer keep this pi...
Views - 175 Time Left - 11 Days, 16 Hours in Posters by soshimor Two unused "The Boys" posters, I'm selling them...
Views - 129 Time Left - 8 Days, 18 Hours in Fashion by CatheeFash I buy last year Jessica no.52 from the MV Girls...
Views - 132 Time Left - 6 Days, 14 Hours in Korean Discography by syrus As show in the picture below, have most of the...
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447657.38/warc/CC-MAIN-20141017005727-00269-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
951
32
https://www.bimmerfest.com/threads/e63-trying-to-code-lm2-from-lm1-something-went-wrong-several-errors-now.1432117/#post-13692261
code
Hey guys, I really hit a roadblock here and I'm in need of some assistance, please! Now, from what I think I understand... I can use a blankmans file to run default to LM2, but I am pretty new at coding and I'm still learning the basics, so I'm in need of help! Can someone please explain how to do this? I want to code the new LM2 using a blankmans file and then run default to LM2. Can someone please explain the steps to do this in NCSExpert and then run it in WINKFP? I was missing the location of a file, but I've figured that out now in WINKFP. I was trying to do this to get the daytime running lights and the brighter angel eyes setting. I have found how to do the DRL lights, but I haven't found anything on coding brighter angel eyes on the E63 with the LM2. Is this even doable on an E63? Can anyone help? Thanks so much in advance, guys! Like I said, I'm just starting out on coding, only about a week in as of now. Is it also correct that when coding/exporting files from NCSDummy you should only code/export one thing at a time to the psw/trc file? It'll probably be a while before I'm any good at this.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00537.warc.gz
CC-MAIN-2023-06
1,096
1
http://linuxandfriends.com/installing-virtualbox-guest-additions-in-ubuntu-linux/
code
Installing VirtualBox Guest Additions In Ubuntu Linux VirtualBox Guest Additions are a set of drivers and utilities which are installed inside the guest OS. Once installed, they improve the performance of the guest OS and its cooperation with the rest of the product, and even with the host OS. Advantages of using VirtualBox Guest Additions Once installed in the guest OS, VirtualBox Guest Additions provide the following extra features. - Seamless mouse integration between the host OS and the guest OS. You no longer have to press the RIGHT CTRL key each time you want to switch the mouse between the host and guest OS. - Resizing the VirtualBox virtual machine window will automatically resize the guest OS desktop. - Cut and paste support between the host OS and the guest OS. I have created a video tutorial explaining the steps required to install VirtualBox Guest Additions in Ubuntu Linux running as the guest OS. Watch the following video to learn how it is done. Also check out a video on how to create a virtual machine in VirtualBox.
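For readers who prefer the command line over the video, a hedged sketch for an Ubuntu guest (package names are the stock Ubuntu ones; the mount path assumes the Guest Additions CD image has already been inserted from the VirtualBox Devices menu):

```shell
# Option 1: install the guest additions packaged by Ubuntu
sudo apt-get update
sudo apt-get install virtualbox-guest-utils virtualbox-guest-x11

# Option 2: use the CD image that ships with VirtualBox
# (Devices > Insert Guest Additions CD image... in the VM window first)
sudo mount /dev/cdrom /mnt
sudo sh /mnt/VBoxLinuxAdditions.run
sudo reboot
```

Either route ends with a reboot of the guest so the new kernel modules and X drivers are picked up.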
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709947846/warc/CC-MAIN-20130516131227-00090-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,032
9
https://farmtoschoolmanitoba.ca/9-bundle-assembly/
code
Assemble bundles for distribution according to the student & child order forms. It is helpful to have an assembly line for each bundle size, starting with the included shopping bags, then potatoes, with the other veggies on top. BUNDLE A: 2lb carrots, 2lb onions, 5lb potatoes BUNDLE B: 3lb carrots, 3lb onions, 10lb potatoes, 1lb parsnips, 1 cabbage ** You will NOT have to weigh out the veggies; vegetables are delivered in pre-weighed bags. Volunteers need to sort the veggies into Bundles A or B and bag them in the shopping bags provided (1 re-useable bag per bundle).
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00627.warc.gz
CC-MAIN-2021-10
579
4
http://mrs.flinger.us/blog/blog_permalink/sticky_notes
code
Note: I'm working on getting comments set up again. If it's worth it. Tell me it's worth it? There's some styles that need to be done and some ajax work that makes me want to cut myself and use vue.js instead but that's yak shaving and really, y'all, as much as I freaking LOVE hearing from you, and I do, believe me, I hated mining the comments from spammers to get to the good stuff. So let me know it's worth it and I'll dig deeper and shave every damn yak for this to work. Currently 60% of the way there. A little push push could make the difference.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219242.93/warc/CC-MAIN-20180822010128-20180822030128-00362.warc.gz
CC-MAIN-2018-34
554
1
https://www.collaborizm.com/thread/E1EUy3pzZ
code
I want to start a project in INDIA in which I could make electricity from garbage, which is already happening in several countries. Rahul, have you tried starting a project and describing your vision in more detail? Various methods are in practice; some are cheap, some need huge infrastructure... What are you planning? Any example or research work by which you are inspired on this project? I'm also interested... #Question Rahul: fuel, gas and coal are all used to produce heat, which rotates a turbine. If you want to try, take a tin cylinder, half fill it with dry garbage, make a fire in it, and see if you can use it in useful ways.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863100.8/warc/CC-MAIN-20180619154023-20180619174023-00396.warc.gz
CC-MAIN-2018-26
638
5
https://community.airtable.com/t/lookup-fields-with-numbers-became-text-in-sync-table/38716
code
I’m working on two separate bases for my organization: the first manages the orders, and the second the tracking and payment process. With synced tables I’m able to get the tables from the order base into the second base for the planning and processing of the shipment and payments. But I’m encountering a problem: there are some lookup fields about quantities that I need to process as numbers in the second base. The synced table shows them as text, so the lookup on the tracking table is also showing them as text, and I can’t modify the format to a number. I’ve also already tried to add another field with a VALUE() formula, but it gives an #ERROR! and doesn’t evaluate the text into a number. Any idea how to proceed?
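One workaround often suggested for this (hedged: {Quantity lookup} is a placeholder field name, and the behaviour assumes the lookup holds a single value) is to flatten the lookup's array result to a plain string with ARRAYJOIN() before converting, since VALUE() can error on array-typed lookup output:

```
VALUE(ARRAYJOIN({Quantity lookup}))
```

If the lookup can return several values, ARRAYJOIN concatenates them, so the conversion only makes sense for single-value lookups.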
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00469.warc.gz
CC-MAIN-2021-21
730
4
https://amysimpsongrange.com/2023/05/18/creating-a-keyvault-in-oracle-cloud/
code
Note 1: The details in this post assume that you already have an Oracle Cloud free tier (or upgraded!) account. If you don’t, you can get one here. Note 2: Given the regular release schedule present in Oracle Cloud, it is possible that the screens may change somewhat after the writing of this post. If this happens, please comment on this post and I will try to help you out (and update this post!). In this post, I will explain how you can very easily create a KeyVault in Oracle Cloud. Create a KeyVault Step 1: Log into your Oracle Cloud account (free tier is suitable) Step 2: Click on the menu icon (often referred to as the “Hamburger” or “Pancake Stack” icon) Step 3: Select “Identity & Security” and then select “Vault” Step 4: You will be presented with the KeyVault management screen. Scroll down and select the compartment in which you would like to create a KeyVault. If you don’t have any compartments, you can take a look at my post, Creating a Compartment in Oracle Cloud. Step 5: Once you have selected your compartment, you will be presented with the KeyVault management screen specifically for that compartment. Click “Create Vault” Step 6: Provide basic information for the KeyVault creation and click “Create” Step 7: Oracle Cloud will now create the KeyVault. The amber-coloured symbol and state = “Creating” signify that the KeyVault is not yet ready to use Step 8: The state will change to green and read “Active” when the KeyVault has been fully provisioned Create a Key Now that we have a KeyVault created as per the above steps, we can create a key within the KeyVault. Step 1: Click on your newly provisioned KeyVault and you will be presented with the KeyVault management page Step 2: Click “Create Key” and enter basic information including the required algorithm and length Step 3: Click “Create Key” and the key will be created and ready to use in Oracle Cloud
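The same console steps can also be scripted with the OCI CLI; a sketch under the assumption that the CLI is configured for your tenancy (the compartment OCID, display names, and endpoint below are placeholders):

```shell
# Create the vault in a compartment
oci kms management vault create \
    --compartment-id ocid1.compartment.oc1..example \
    --display-name my-keyvault \
    --vault-type DEFAULT

# Key operations go against the vault's management endpoint
# (shown in the vault details once the state is Active)
oci kms management key create \
    --compartment-id ocid1.compartment.oc1..example \
    --display-name my-key \
    --key-shape '{"algorithm": "AES", "length": 32}' \
    --endpoint https://<management-endpoint>
```

As in the console flow, the vault must reach the Active state before key creation against its management endpoint will succeed.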
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657169.98/warc/CC-MAIN-20230610095459-20230610125459-00383.warc.gz
CC-MAIN-2023-23
1,934
17
https://www.cnet.com/culture/microsoft-gives-firefox-an-h-264-video-boost/
code
Mozilla has shunned the H.264 video technology, but Microsoft is easing its use with Firefox on Windows. WebM video fans might not be pleased. Mozilla is outspoken in its dislike of the patent-encumbered video technology called H.264, but Microsoft, an H.264 fan, is providing a plug-in that will let Windows 7 users use it anyway. H.264 is a codec--technology to encode and decode video--that's widely used in videocameras, Blu-ray players, online video streaming, and more. It's built into Adobe Systems' Flash Player browser plug-in, but most people don't know or need to know it's there. When it comes to the flagship feature of built-in video support coming to the new HTML5 specification for creating Web pages, though, codec details do matter. Not all browsers support H.264 or its open-source, royalty-free rival from Google, the VP8-based WebM. That means Web developers must make sure they support both formats or provide a fallback to something like Flash. Otherwise they risk leaving some viewers behind. To help bridge the divide, Microsoft has released a plug-in that lets Firefox tap into Windows 7's native H.264 support for HTML5 video. The move could help pave over some of the new Web's rough patches, but also irritate WebM fans who want to see the Web move to unencumbered technology. "H.264 is a widely-used industry standard, with broad and strong hardware support. This standardization allows users to easily take what they've recorded on a typical consumer video camera, put it on the Web, and have it play in a web browser on any operating system or device with H.264 support, such as on a PC with Windows 7," Microsoft said. "The HTML5 Extension for Windows Media Player Firefox Plug-in continues to offer our customers value and choice, since those who have Windows 7 and are using Firefox will now be able to watch H.264 content through the plug-in." 
According to the plug-in's release notes, "The extension is based on a Firefox add-on that parses HTML5 pages and replaces video tags with a call to the Windows Media Player plug-in so that the content can be played in the browser. The add-on replaces video tags only if the video formats specified in the tag are among those supported by Windows Media Player. Tags that contain other video formats are not touched." Microsoft is working on ironing out user-interface differences between Windows Media Player controls and those that would show with video playing natively in the browser. Microsoft already had offered a related Firefox plug-in that let people watch Windows Media videos on the Web. Mozilla is working to try to establish WebM as a required codec for HTML5, a specification standardized by the World Wide Web Consortium (W3C). Updated 8:37 a.m. PT with download link and release note information.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649293.44/warc/CC-MAIN-20230603133129-20230603163129-00006.warc.gz
CC-MAIN-2023-23
2,792
10
https://xpra.org/trac/ticket/1620
code
Xpra: Ticket #1620: rfb server support Implementing a bare bones RFB server turned out to be trivial since the protocol is so simple (unlike xpra, no windows or metadata to deal with!): rfc6143. This may be useful for comparing with VNC clients, debugging the vfb state, etc TODO before the next release (minimum required for secure usage): - bug: screen updates are always one frame behind - authentication step (support at least - move code to an rfb support module and also support rfb in shadow servers - honour server sharing options, fix source missing attributes (uuid, etc) - verify control channel commands, etc (anything that might dereference missing source attributes) Extras (probably for a later milestone): - SSL support (should be trivial) - refactor rfb protocol code into a common superclass with regular xpra protocol (packet accounting, threads, close, etc) - more encodings than just plain RGB (ie: copyrect for scrolling, jpeg) - clipboard support - support colormap modes? (8 / 16 bit) - desktop-size pseudo encoding (randr like) - maybe add a default port for bind-rfb (5900 + DISPLAY) - support older protocols To use it: xpra start-desktop :100 --start=xterm --bind-rfb=0.0.0.0:5900 -d rfb vncviewer 127.0.0.1:0 -Log "*:stderr:100" Wed, 09 Aug 2017 16:37:12 GMT - Antoine Martin: status changed changed from new to assigned Code added in r16673, see also #639. Sun, 27 Aug 2017 04:46:53 GMT - Antoine Martin: Updates linked to UDP (#639): - r16710 + r16713: authentication modules refactoring - preparation - r16712: large rfb update (see commit message) Wed, 06 Sep 2017 15:50:49 GMT - Antoine Martin: owner, status changed changed from Antoine Martin to J. Max Mena changed from assigned to new Should be fully usable as of r16784 (see commit message). This is useful for testing, but will need work before being able to compete with other RFB servers (#1632). @maxmylyn: this is just a FYI, feel free to close. 
It does support authentication and sharing options, ie: echo -n 01234567 > password.txt xpra start-desktop :100 --start=xterm --sharing=yes \ --bind-rfb=0.0.0.0: --rfb-auth=file,filename=./password.txt --no-daemon You can then connect with vncviewer: echo -n 01234567 | vncpasswd -f > password.vnc vncviewer 127.0.0.1:0 -Log "*:stderr:100" -Shared=1 -passwd=./password.vnc And since sharing is enabled, you can connect an xpra client simultaneously: xpra attach tcp:localhost Wed, 06 Sep 2017 20:59:59 GMT - J. Max Mena: status changed; resolution set changed from new to closed set to fixed Noted and closing. Thu, 07 Sep 2017 09:47:54 GMT - Antoine Martin: As usual, this required a bunch of platform tweaks and fixes: - better keycode support: r16785, with macos and win32 servers support: r16789 - shadow servers needed to be told to start refreshing: r16786 - handle win32 sockets timeouts: r16787 - mouse button handler method signature requires position on win32: r16788 - platform import fixes, macos doesn't have X11: r16790, r16791 - macos shadow server fixes: r16792, r16794 - TCP sockets can be "upgraded" to RFB after a timeout: r16836 It would be nice to add support for Thu, 18 Jun 2020 09:02:50 GMT - Antoine Martin: Regression in v4: #2811. Sat, 23 Jan 2021 05:29:18 GMT - migration script: this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1620
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00633.warc.gz
CC-MAIN-2022-33
3,323
61
http://mapopa.blogspot.com/2010/08/new-to-ubuntu-linux-start-with-guides.html
code
Start with the Ubuntu manual or the official Ubuntu docs or the Debian docs. If the manual is not enough, then there is a nice shell guide where they teach you ancient unix commands in the terminal. What is next? Maybe some programming language books, like Python or SQL books, if you are inclined to learn something new. The next level would be drifting through the Linux manpages; this is what I did in my journey. At the end is nirvana: learning the kernel internals and the source code (it's English slang and simple C), which can be tweaked at will, and anyone can send patches upstream. ps: never ask why flash is slow on Windows, Linux, Mac: adobe bug she hates us "She's always hungry. She always needs to feed. She must eat."
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.71/warc/CC-MAIN-20170423031202-00526-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
717
6
https://transitionwhatcom.ning.com/profiles/blogs/swale-making-preparations-using-the-a-frame-to-map-contour-lines
code
Using the map of the elevation lines on the property to roughly gauge where the contours are, David used the A frame to plot the course of contour lines across the property where we plan to carve the swales. Using this map with elevation contour lines: Using a level and a plumb line to find the level spots at this contour line: Back in the previous post there were some videos showing how this is done...see: And it always helps to have a nice kitty doing rollie-zollies in the grass nearby.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510284.49/warc/CC-MAIN-20230927071345-20230927101345-00505.warc.gz
CC-MAIN-2023-40
493
5
https://link.springer.com/chapter/10.1007%2F978-3-642-39071-5_15
code
Experiments with Reduction Finding - Cite this paper as: Jordan C., Kaiser Ł. (2013) Experiments with Reduction Finding. In: Järvisalo M., Van Gelder A. (eds) Theory and Applications of Satisfiability Testing – SAT 2013. SAT 2013. Lecture Notes in Computer Science, vol 7962. Springer, Berlin, Heidelberg. Reductions are perhaps the most useful tool in complexity theory and, naturally, it is in general undecidable to determine whether a reduction exists between two given decision problems. However, asking for a reduction on inputs of bounded size is essentially a \(\Sigma^p_2\) problem and can in principle be solved by ASP, QBF, or by iterated calls to SAT solvers. We describe our experiences developing and benchmarking automatic reduction finders. We created a dedicated reduction finder that does counter-example guided abstraction refinement by iteratively calling either a SAT solver or a BDD package. We benchmark its performance with different SAT solvers and report the tradeoffs between the SAT and BDD approaches. Further, we compare this reduction finder with the direct approach using a number of QBF and ASP solvers. We describe the tradeoffs between the QBF and ASP approaches and show which solvers perform best on our \(\Sigma^p_2\) instances. It turns out that even state-of-the-art solvers leave large room for improvement on problems of this kind. We thus provide our instances as a benchmark for future work on \(\Sigma^p_2\) solvers.
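The counter-example guided loop described in the abstract can be illustrated with a toy exists-forall search in Python (illustrative only: exhaustive enumeration stands in for the SAT/BDD oracles, and all names are made up). Propose a candidate consistent with the counterexamples seen so far, look for an input it fails on, and repeat until verification succeeds or no candidate is left:

```python
def cegar_synthesize(candidates, inputs, check):
    """Toy exists-forall solver: find c such that check(c, x) holds for every x,
    refining a candidate against counterexamples instead of checking all pairs up front."""
    counterexamples = []
    while True:
        # Propose: any candidate consistent with the counterexamples seen so far
        # (one SAT/BDD query in the paper's setting).
        cand = next((c for c in candidates
                     if all(check(c, x) for x in counterexamples)), None)
        if cand is None:
            return None  # refuted: no candidate survives the counterexamples
        # Verify: search for an input the candidate gets wrong (the second query).
        cex = next((x for x in inputs if not check(cand, x)), None)
        if cex is None:
            return cand  # verified: no counterexample exists
        counterexamples.append(cex)

# Tiny demo: find a 4-bit mask m that "sees" every nonzero 4-bit input,
# i.e. for all x in 1..15, x & m != 0. Only m == 15 qualifies.
mask = cegar_synthesize(range(16), range(1, 16), lambda m, x: (x & m) != 0)
```

In the paper's setting each of the two inner searches would be a solver call; collecting counterexamples keeps the candidate query small instead of unfolding the full \(\Sigma^p_2\) problem into one formula.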
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322873.10/warc/CC-MAIN-20170628065139-20170628085139-00321.warc.gz
CC-MAIN-2017-26
1,515
5
https://share.sis.org.cn/21hk02/2014/07/
code
This year for my summer vacation my father planned to let me go to Atlanta. Well, I told my father every day that I wanted to go, so I went to Atlanta. First I had a UM Service, which is where the stewardess helps me on the airplane for like 14 hours. When I arrived at the Atlanta airport I met some kids and parents that I am going to stay with. It was fun to meet them. In Atlanta, I will go to Day Camp, Basketball Camp, and other good camps. Lastly, I miss my teachers, friends, and my family but of the time some people get which is sad for me. Anyway, I am having a good time in Atlanta.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00283.warc.gz
CC-MAIN-2022-05
586
4
http://sexualmisconductresources.emory.edu/what_to_know/index.html
code
Obtaining Information, Assistance and Support; Reporting Options If you have experienced sexual or gender-based violence or harassment, there are multiple channels for obtaining information, assistance, and support to ensure your health and safety, both physical and emotional. There are also a number of ways to report the incident should you desire to do so. The resource sheet linked in the navigation menu to your left is intended to provide an overview of your options. The term “sexual or gender-based violence” is used here as an umbrella term to refer to all prohibited conduct as defined in Emory University’s Equal Opportunity and Discriminatory Harassment Policy 1.3, http://policies.emory.edu/1.3 and Sexual Misconduct Policy 8.2, http://policies.emory.edu/8.2. Prohibited conduct includes sexual assault (including non-consensual sexual contact or non-consensual sexual intercourse, or attempts to commit either), sexual exploitation, dating violence, domestic violence, intimate partner violence, stalking, gender-based harassment, and retaliation against any person for making a good faith report of prohibited conduct or for participating in any proceedings under these policies. Detailed definitions of these and other key terms, including “consent” and “incapacitation” are set forth in the policies.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00234-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,332
3
https://stackoverflow.com/questions/797190/file-uploading-in-ajax-updatepanel-without-full-postback
code
I have an UpdatePanel, and in the UpdatePanel I have a FileUpload control and a Button control. On button click, I need the file that I have uploaded in the FileUpload control in the UpdatePanel. Exact scenario: I have 8 tabs on the page, and each tab contains a lot of information. One of the tabs is Attachment; when the user clicks on Add New Attachment, a modal popup is shown. The modal contains a DetailsView in an UpdatePanel, and in the DetailsView I have a FileUpload control. When the user hits the Save button, the DetailsView inserting event fires. In the inserting event I need the file that I have uploaded. Please note, my page is heavy and I don't want a full postback. Does anyone have a solution for this issue? Thanks in advance for your kind help.
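A common workaround for this scenario (a sketch, with invented control IDs): the FileUpload control does not post its file content during an asynchronous postback, so the save button is typically registered as a PostBackTrigger of the UpdatePanel. That forces a real postback for that one button only, while the rest of the heavy page keeps partial rendering:

```aspx
<asp:UpdatePanel ID="upAttachment" runat="server">
  <ContentTemplate>
    <asp:DetailsView ID="dvAttachment" runat="server">
      <!-- ... fields, including the FileUpload in the insert template ... -->
    </asp:DetailsView>
    <asp:Button ID="btnSave" runat="server" Text="Save" />
  </ContentTemplate>
  <Triggers>
    <!-- File uploads need a real postback; scope it to the save button only -->
    <asp:PostBackTrigger ControlID="btnSave" />
  </Triggers>
</asp:UpdatePanel>
```

The trade-off is that this one button does cause a full postback; truly postback-free uploads require a client-side upload technique instead of the server FileUpload control.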
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00532.warc.gz
CC-MAIN-2019-30
703
5
https://gitlab.mn.tu-dresden.de/paraphase/dune-fracture-phasefields/-/merge_requests/20
code
Currently, it is not possible to use range-based for to loop over the entries of a BCRSMatrix object. That is because the iterator provided by the matrix row has a method 'index', which returns the current matrix column, and this method is not accessible from a range-based for loop. As a consequence, normal iterator loops have to be used, which makes the code clumsy. Now C++17 has brought structured bindings, which offer a very elegant way around the problem. There is a proposal in dune-common for a global method 'sparseRange', which wraps an iterator range in a custom iterator. This iterator, when dereferenced, returns std::pair<reference,int>. Hence it is possible to write

for (auto [entry, i] : sparseRange(matrixRow))
  std::cout << "column " << i << " contains " << entry << std::endl;

in lieu of

auto it = matrixRow.begin();
auto endIt = matrixRow.end();
for (; it!=endIt; ++it)
  std::cout << "column " << it.index() << " contains " << *it << std::endl;

However, besides the use of the 'index' method there is nothing matrix-specific or dune-istl-specific here.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00076.warc.gz
CC-MAIN-2023-50
1,075
11
https://skillsmatter.com/skillscasts/6547-high-performance-programming-in-haskell
code
In this talk, we'll take a deep dive into how to write high performance Haskell code, using what we've learned while optimizing the core Haskell libraries. We'll focus on understanding the memory layout of Haskell data types and how it can be optimized to make your program run faster. I'll give you several "rules of thumb" for writing code that performs well from the start, rather than having to be patched up once performance issues arise. This talk complements Bryan O'Sullivan's 2014 talk on Performance Measurement and Optimization in Haskell, by focusing more on actual optimisations, rather than measuring performance. Join us at the Haskell eXchange in 2016! Want to learn about the latest innovations in Haskell? Join 200+ Haskell and functional programmers to learn and share skills with some of the world's top Haskell experts at the Haskell eXchange 2016 in London. Find out all about Haskell's infrastructure roadmap, learn how Haskell is used in academia and enterprise and discover how Haskell is changing the way our industry tackles complex engineering problems. Early bird tickets already available! High performance programming in Haskell Johan Tibell is a Googler and a long time contributor and maintainer of some of the core Haskell libraries, including the most popular data structure and networking libraries.
Johan has worked on GHC's threading implementation for scalable I/O, modern hashing-based data structures, and the high-performance Python protocol buffer implementation used inside Google.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998339.2/warc/CC-MAIN-20190617002911-20190617024911-00067.warc.gz
CC-MAIN-2019-26
2,147
15
http://eprm.ardent-tool.com/epr3e/16361.htm
code
DLT7000 Tape Drive Diagnostic and Fix Verify Selections The following DLT7000 tape drive read/write test is available in the DIAG OFFLINE mode using the tape library control keys. Before using this selection, all of the target devices on the SCSI bus (tape library and DLT7000 tape drives) must be finished. -Attention- You must be extremely careful to use only the CE cartridge in this test. Using a customer cartridge will overwrite customer data.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304947.93/warc/CC-MAIN-20220126101419-20220126131419-00464.warc.gz
CC-MAIN-2022-05
449
5
https://security.snyk.io
code
We’ve disclosed 2270 vulnerabilities by Snyk Security
How to fix? Upgrade org.springframework:spring-beans to version 5.2.20, 5.3.18 or higher.
utility-common-v2 is a malicious package. The package's name is based on existing repositories, namespaces, or components used by popular companies in an effort to trick employees into downloading it, also known as 'dependency confusion'. Therefore, you're only vulnerable if this package was installed from the public NPM registry rather than your private registry. Note: This malicious package was uncovered by one of Snyk's automated algorithms, and was confirmed to contain malicious code by our Security Research Team. For more context, please visit our blogpost.
Affected versions of this package are vulnerable to Denial of Service (DoS) when using internationalized URLs, due to the locale parameter being interpreted as a regular expression.
Affected versions of this package are vulnerable to Denial of Service (DoS) via the parsing procedure for binary and text format data. Input streams containing multiple instances of non-repeated embedded messages with repeated or unknown fields cause objects to be converted back and forth between mutable and immutable forms, resulting in potentially long garbage collection pauses.
Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer's toolkit.
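The spring-beans remediation can be expressed as a Maven dependency pin, for example (a sketch: the coordinates come from the advisory above, and the right fixed version depends on which Spring line you are on):

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-beans</artifactId>
  <!-- 5.2.20 for the 5.2.x line, 5.3.18 or higher for 5.3.x -->
  <version>5.3.18</version>
</dependency>
```

When spring-beans arrives transitively, the same pin can go in dependencyManagement so it overrides the transitive version.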
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00009.warc.gz
CC-MAIN-2022-40
1,659
10
http://ourchords.xyz/archives/5290
code
Jam-upnovel Nanomancer Reborn – I’ve Become A Snow Girl? novel – Chapter 1064 Jester different instruct reading-p2 Novel–Nanomancer Reborn – I’ve Become A Snow Girl?–Nanomancer Reborn – I’ve Become A Snow Girl? is an astronomy major worth it Chapter 1064 Jester passenger pop The Invisible Lodge “I’m not saying we will need to make her an ally since we might turn out to be adversaries in the foreseeable future. But her aid could well be valued. I’m making the final final decision to you because this is something relates to you. In the event you don’t wish to be her ally then we won’t.” american indian stories the soft-hearted sioux In addition, the spell comes with a high probability of failing on those that have roughly the same toughness when you. Resupplying her products with liquid, she closed up the portal. Coming before Estrella’s doorway, s.h.i.+ro needed to knock onto it when she realised anything significant. top quality beauty cultivation system boxnovel “Frustrating.” She muttered. “For those who demand.” “My devotion is in your direction. Whilst the prior is essential, I’d somewhat guard one that’s life and respiratory facing me.” “You don’t have to, you understand? You are able to handle it if you get out of bed.” s.h.i.+ro elevated an eyebrow. “Should you insist.” For lots more, stop by lightnovelpub[.]com “Certain.” Coming into the space, s.h.i.+ro glanced all over and can identify that Estrella was putting together to obtain a new spell. “I understand hence why I’m setting up an item that can clear up this. I give attention to s.p.a.ce miraculous so there are actually loopholes which can be proved helpful all around. The greatest disadvantage in Banishment spells is the spell has to operate two work opportunities simultaneously. Pressuring them inside a unique aspect and closing them. To me, with my skills in s.p.a.ce miracle, that step one is easy and so i don’t need to be concerned. 
I can primary almost all of the magical to sealing them as an alternative.” Estrella spelled out as s.h.i.+ro thought of it for a moment. “I’m sorry… But my priorities are s.h.i.+ro…” She muttered while examining the crest on the Dragon Empress in the extended distance. “Sup? Have a little something occur?” s.h.i.+ro requested. She acquired left Nan Tian behind since she figured that the chat should take below 10-20 minutes. She didn’t anticipate to secure a call not very a long time after she eventually left the space. Phoning Nan Tian, s.h.i.+ro searched into the area from the window. Taking hold of the one in between, a rune sprang out on the rear of s.h.i.+ro’s palm as she examined for your details she necessary. “Mn I did. We’ll discuss with Syradil down the road however, not but. For the present time, I wish for you to have all of the data and video clip from the incident with the 3 traitors. Mail their corpses with me as well, I’ll talk with their souls. There needs to be some traces eventually left.” s.h.i.+ro narrowed her vision as Nan Tian nodded his go. The best updated books are publicized on lightnovelpub[.]com “Very well I found myself looking at over a couple of things as you hosted a raid this evening. Want to are offered in?” Estrella inquired as s.h.i.+ro contemplated for a moment just before nodding her go. The source for this content is lightnovelpub[.]com Wandering towards Estrella’s area, s.h.i.+ro gained a phone call from Nan Tian. “Sure.” Entering the bedroom, s.h.i.+ro glanced around and might note that Estrella was creating for the new spell. Finishing the phone call, s.h.i.+ro achieved into her inventory and sought to get a mug of juices. dinner at the homesick restaurant quotes “Mn? Positive, are you wanting a chair?” Estrella provided as s.h.i.+ro nodded her go. “Certainly I actually, How can I not know as it was completed in front of me.” Estrella responded as s.h.i.+ro nodded her go. 
Clenching her fists, she closed down her eye and recollected the arena which had haunted her. It was the reason why she left behind the nature authorities from the beginning. Her inability to safeguard the Princess even though she was there. “My loyalty is in your direction. While past is important, I’d rather safeguard one that’s dwelling and inhaling and exhaling in front of me.” “Perfectly I became verifying over a few points because you sponsored a raid today. Prefer to may be found in?” Estrella requested as s.h.i.+ro contemplated for just a moment just before nodding her travel. “I’ve grabbed every one of the doc.u.ments related to what they’ve been around in their whole relax in Asharia. Their tasks, practices for example.” Nan Tian reported as s.h.i.+ro nodded her top of your head. “It’s good, I’m awaiting you anyways. I’ll just contend with it now.” Ruth Fielding At Sunrise Farm Just before the final soul could explode, s.h.i.+ro narrowed her view. Contemplating this, s.h.i.+ro furrowed her brows and made close to. “Surprise~ It’s me, your older friend. They’ve already closed a binding agreement with me so as their professional, I actually have to ensure their level of privacy. Bye~” The jester waved in an overstated action well before disappearing. A green light came out in their vision as she could see faint trails with their souls. Stick to up-to-date books on lightnovelpub[.]com For further, visit lightnovelpub[.]com “That seems very difficult. Aiming to close someone tougher than you in another dimension is tough. And people weaker than you can just be killed.” s.h.i.+ro mentioned as hardly any bother with banishment spells for that reason purpose. “Ah, Estrella, I figured you were asleep.” s.h.i.+ro increased an eyebrow as she behaved as though she hadn’t been loitering here for a second. The Marne, 1914 “Hold out, anyone had been able finish the protection?… Hmm… this is usually a concern. 
I ought to have filtered them out though… Improve our filtering process. Transform it into a minimal more difficult to acquire people.h.i.+p right here so that they have to prove they’re no longer working for another Queen.” s.h.i.+ro explained as Nan Tian nodded. Abide by up-to-date books on lightnovelpub[.]com
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00736.warc.gz
CC-MAIN-2023-14
6,307
50
http://www.linuxquestions.org/questions/linux-newbie-8/video-card-on-ubuntu-4175432212/
code
Originally Posted by moskis8 I have a nvidia video card, but I do not know which model I can not find and I have just installed ubuntu os but unfortunately there is no video card driver where or how do I corrected this problem Manual NVIDIA binary driver package install Install the NVIDIA binary driver package manually. It will automatically blacklist the Nouveau default driver and make the kernel modules for you: sudo apt-get install nvidia-current Then simply modify your "xorg.conf" to use it. Copy and paste the whole code snippet in the terminal where you ran "sudo -i":
echo 'Section "Screen"
    Identifier "Default Screen"
    Device "Default Device"
EndSection
Section "Device"
    Identifier "Default Device"
    Driver "nvidia"
    Option "NoLogo" "True"
EndSection' > /etc/X11/xorg.conf
Then reboot your machine. In case it doesn't work, you can get back to the default state by reverting the changes: sudo rm /etc/X11/xorg.conf && sudo apt-get purge nvidia-current nvidia-settings And of course then you need to reboot.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00274-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
938
15
https://community.qlik.com/t5/QlikView-App-Dev/INLFED/m-p/868262/thread-id/994889/highlight/true?attachment-id=106793
code
Discussion Board for collaboration related to QlikView App Development. Please find the attachment in which you can see two tables, Hotel Property and INLFED. My problem is that I am not able to see the INLFED in the script. Where is it? Can anyone help me with this? How do I resolve this? Thanks for the reply, Alessandro. But I am not understanding where to find it. I need to make some changes in it, but I searched in the script and could not find it. Will you please tell me where I can find this INLFED table? In your case, search the "Name" fieldname in the Find What option by using "Ctrl+F". And check the "Search all tabs" checkbox. You can find that table. Or search for inline in the "Find What" option. INLFED is a default name that Qlik gives to a table loaded with the instruction Load * inline [ Look for this piece of script; if you do not find it, send me your script. You won't find the text INLFED anywhere in your load script. The name is created by QlikView because you did not provide an explicit name for an inline load. Search for INLINE in your script.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153391.5/warc/CC-MAIN-20210727103626-20210727133626-00407.warc.gz
CC-MAIN-2021-31
1,075
15
http://www.celiac.com/gluten-free/topic/104310-no-shampoo/page-3
code
So, update on this. I did the baking soda/vinegar thing for about two months. At first, it was great! My hair was healthy and shiny and I loved it. But as time went on, it got clumpy and greasy and nothing seemed to help. I REALLY wanted no shampoo to work for me, but it just didn't. I tried homemade stuff with castile soap, but that left my hair a grease ball as well, so I'm back to organic, natural shampoo. Sigh. Posted 13 January 2014 - 02:30 PM Number 3 clipper on the top and a number 2 on the sides and back. Ivory soap does a great job !! Officially diagnosed 12/24/13 (Merry Christmas to me) The greatest truths are the simplest; and so are the greatest men.
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990609.0/warc/CC-MAIN-20150728002310-00320-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
742
8
https://www.sdxcentral.com/products/dell-networking-os10/
code
Dell EMC OS10 About Dell EMC OS10 Dell EMC OS10 combines the best of Linux, open computing and networking to advance open networking disaggregation. Hardware abstraction through common APIs Dell EMC OS10 is a transformational software platform that provides networking hardware abstraction through a common set of APIs so you can: Consistency across resources Enable consistency across compute and network resources for your system operators (SysOps) groups that require server-like manageability. Dell EMC OS10 helps you: A simple transition to open networking Easily leverage your existing network configuration. Dell EMC OS10 incorporates traditional networking integration so you can: Enhance the integration and control you allow your development and operations (DevOps) teams, down to identifying an object as an individual, manageable entity within the platform.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584547882.77/warc/CC-MAIN-20190124121622-20190124143622-00539.warc.gz
CC-MAIN-2019-04
1,469
11
https://jalbum.net/forum/thread.jspa?messageID=342296&tstart=0
code
The user may be building a project with many branches, and wants to set up the structure before populating it with images. He'll just be puzzled about why he's not seeing the structure he's created. Hiding empty directories falls into the category of doing the user a "favor" that he didn't ask for. If he wants to have empty directories ignored, the core already gives him a way to do that - he can hide or exclude them with a click. Right. That's exactly what I was looking for. Thanks! You too Rob of course but the above is I think key.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00412.warc.gz
CC-MAIN-2020-34
540
4
https://reflectionit.nl/blog/2003/reliably-and-quickly-improve-your-c-code-with-xtreme-simplicity-s-c-refactory
code
I have found a promising C# tool which fully integrates with Visual Studio.NET. I'm going to test it soon (I hope). Have a look yourself at http://www.xtreme-simplicity.net
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711064.71/warc/CC-MAIN-20221205232822-20221206022822-00197.warc.gz
CC-MAIN-2022-49
465
3
https://deepai.org/publication/an-efficient-algorithm-to-test-potentially-bipartiteness-of-graphical-degree-sequences
code
Given an arbitrary graphical degree sequence , let denote the set of all of its non-isomorphic realizations. As usual, let and denote the chromatic number and clique number of a finite simple undirected graph respectively. It is known from Punnim Punnim2002A that for any given the set is exactly a set of integers in some interval. Define to be and to be . These two quantities can be interesting for the structural properties of all the graphs in . It appears computationally intractable to compute for any given zero-free . In this paper we are concerned with the related, somewhat easier, decision problem of whether . Clearly, this is equivalent to decide whether has a bipartite realization, which is actually the first listed unsolved problem in Rao Rao1981 to characterize potentially bipartite graphical degree sequences and which remains unsolved to our knowledge. Note that the input is a single sequence of vertex degrees. A related problem is to decide, given two sequences of positive integers , where and and , whether there is a bipartite graph whose two partite sets have and as their respective degree sequences. This problem can be easily solved by applying the Gale-Ryser theorem Gale1957 ; Ryser1957 , which states that the answer is “yes” if and only if the conjugate of dominates (or, equivalently, the conjugate of dominates ). Here we use the common definition of domination between two partitions of the same integer: a partition dominates a partition if for each . By convention, for , where denotes the number of parts in the partition . We also use to denote the weight of the partition , that is, the sum of all the parts of . The rest of the paper is organized as follows. Section 2 describes the algorithm to decide whether a given has a bipartite realization. Section 3 gives a time complexity analysis of the algorithm. Section 4 presents some experimental results. 
Section 5 discusses alternative designs of the algorithm and comments on the complexity of the decision problem. Section 6 concludes with further research directions. 2 Description of the Algorithm Clearly, to decide whether any zero-free graphical degree sequence with weight has a bipartite realization, we first need to determine whether it has a bipartition into and of equal weights (for convenience, we call such bipartitions of candidate bipartitions). One may feel it challenging to find a candidate bipartition of in the first place, because it looks exactly like the well-known subset sum problem, which is known to be NP-complete GareyJohnson1979 . Fortunately, since every term in of length is less than , this restricted subset sum problem can be solved easily through dynamic programming in polynomial time GareyJohnson1979 ; Koiliaris2019 . In fact, many inputs admit a large number of candidate bipartitions. Now we can see that the decision problem boils down to checking whether has at least one candidate bipartition and, if this is the case, whether any of those candidate bipartitions satisfies the Gale-Ryser condition. A naive algorithm can simply enumerate all candidate bipartitions of and check each of them against the Gale-Ryser condition. Such an algorithm necessarily runs in exponential time in the worst case. Our algorithm is more sophisticated than that. It has two phases. The first phase utilizes up to seven rules that can all be easily checked. As a matter of fact, in Section 4 we will show that most of the inputs can be resolved by this phase alone. The second is the enumeration phase, in which we do “brute-force” search in a clever way. In describing and justifying the seven rules in the first phase, we seek a candidate bipartition of into the left side and the right side in such a way that at least half of the largest terms in appear in , without loss of generality. 
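The two building blocks just described, the subset-sum dynamic program for finding a candidate bipartition and the Gale-Ryser conditional test via conjugation and domination, can be sketched in a few lines of Python. This is an illustrative sketch rather than the paper's implementation, and the function names are ours:

```python
def has_candidate_bipartition(d):
    """True if degree sequence d splits into two sides of equal degree sum.
    Textbook subset-sum dynamic program on a Python-int bitset; polynomial
    here because every term of a zero-free graphical sequence is below len(d)."""
    total = sum(d)
    if total % 2:
        return False
    reachable = 1                     # bit s set <=> some subsequence sums to s
    for x in d:
        reachable |= reachable << x
    return bool((reachable >> (total // 2)) & 1)

def conjugate(parts):
    """Conjugate (Ferrers transpose) of a partition given in non-increasing order."""
    if not parts:
        return []
    return [sum(1 for p in parts if p >= i) for i in range(1, parts[0] + 1)]

def dominates(p, q):
    """True if partition p dominates partition q: every prefix sum of p is at
    least the corresponding prefix sum of q (missing parts count as zero)."""
    sp = sq = 0
    for i in range(max(len(p), len(q))):
        sp += p[i] if i < len(p) else 0
        sq += q[i] if i < len(q) else 0
        if sp < sq:
            return False
    return True

def gale_ryser(a, b):
    """Gale-Ryser test: a bipartite graph whose partite sets have degree
    sequences a and b exists iff the sums agree and the conjugate of b
    dominates a (both taken in non-increasing order)."""
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    return sum(a) == sum(b) and dominates(conjugate(b), a)
```

For example, gale_ryser([2, 2, 1, 1], [3, 2, 1]) holds, while gale_ryser([3, 3], [3, 3]) fails because two right-side vertices cannot supply degree 3.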
For example, for any input of length 50 with the largest term 34 whose multiplicity is 5 (i.e. there are exactly 5 copies of 34 in ), we will seek a candidate bipartition such that the left side contains at least 3 copies of 34. If does not have a candidate bipartition, then it is not potentially bipartite. This rule is obvious. As mentioned above, this rule can be easily implemented through dynamic programming for the subset sum problem. ∎ If , then is not potentially bipartite. Based on Mantel’s theorem Mantel1907 , any simple undirected bipartite graph on vertices has at most edges. So the degree sum cannot exceed for any that is potentially bipartite. ∎ If , then is not potentially bipartite. Suppose is potentially bipartite. The left partite set contains a vertex of degree so the right partite set contains at least vertices ( neighbors), each of which has a degree at most since the left partite set has at most vertices. Consequently, must contain at least degrees that are . Therefore, must be for to be potentially bipartite. ∎ If , then is not potentially bipartite. As mentioned in the proof of Rule 3, the left partite set has at most vertices. Clearly, the degree sum of the left side is impossible to exceed . Therefore, must be at least for to be potentially bipartite. ∎ If , then is not potentially bipartite. As shown in the proof of Rule 3, each of the right side degrees in is at most . Therefore, every degree larger than must be in the left side and the sum of such degrees should not exceed for any that is potentially bipartite. ∎ For the following rule, we will need the concept of residue of a finite simple undirected graph or a graphical degree sequence introduced in Favaron et al. Favaron1991 and we use and as notations. We also use to denote the complementary graphical degree sequence of : , which is the degree sequence of the complementary graph of any realization of . If , then is not potentially bipartite. 
As proved in Favaron1991 , the residue of a graphical degree sequence is a lower bound on the independence number of any realization of . Then clearly is a lower bound on the clique number of any realization of . The result follows because any graph with a clique of size at least 3 is not bipartite. ∎ The following is a similar rule that uses the concept of Murphy’s bound introduced in Murphy MURPHY1991 , denoted or here, which is also a lower bound on the independence number of any realization of . If , then is not potentially bipartite. If the input passes the tests of all of the above seven rules and cannot be resolved as a “no” instance, then our algorithm will enter the enumeration phase. From Rule 5 we know that by now we must have . In the special case that equality holds, which means the left side must contain exactly those degrees that are larger than should be potentially bipartite, our algorithm can immediately stop based on the result of the Gale-Ryser conditional test on this candidate bipartition of . Otherwise, our algorithm continues with , which is the sum of the additional degrees that need to be in the left side besides those that are larger than . For convenience, we use to denote the subsequence of consisting of those degrees that are larger than . Note that is an empty sequence when . The second phase will then enumerate candidate bipartitions of into by specifying which degrees will be in the left side , which also automatically specifies . As we already know, we need to choose a subsequence of (i.e. from those degrees in that are at most ) with sum and concatenate with this subsequence of degrees to form based on the above discussion. Several restrictions regarding can be put on the left side for the candidate bipartitions to possibly satisfy the Gale-Ryser conditional test so that our algorithm will enumerate as few candidate bipartitions as possible. The number of degrees in the left side cannot exceed .
This is because the right side contains at least degrees. Let be the maximum number of degrees in with sum at most . Then the number of degrees in the left side cannot exceed . This is because the degrees in the left side must have sum . Let be the minimum largest degree in any subsequence of with sum at least . Then the number of degrees in the left side must be at least . This is because the largest degree in the right side must be at least and the conjugate of should dominate . Let be the minimum number of degrees in with sum at least . Then the number of degrees in the left side must be at least . The reason is similar to that for Restriction 2. It’s not hard to see that , and can all be easily calculated with greedy algorithms. The above discussion shows we can enumerate all subsequences of that satisfies the following three requirements: it includes all degrees in (i.e. those degrees in that are greater than ). it has sum . its number of degrees should satisfy . In order to find a successful (i.e. satisfying the Gale-Ryser condition) candidate bipartition of , our intuition is to include a suitable number of large degrees from and as many small degrees of as possible into without violating requirement 3 mentioned above. In this way will not include many of the largest degrees in while will still include enough number of degrees, which makes it more likely for the conjugate of to dominate . Following this intuition we calculate a maximum index such that cannot include all in order for its conjugate to dominate . This index can be easily calculated as follows. Starting from , if for some , when we include all in and include from as many smallest degrees as possible into while still maintaining the correct sum , and when the number of degrees in starts to fall below , then can be chosen to be . After has been calculated, we will try to find out if we can include a subsequence of into together with some degrees in such that the conjugate of dominates . 
Without loss of generality, this subsequence can be chosen to be the largest terms of . Or, equivalently, we can remove the smallest terms from one at a time to get these subsequences. For each such subsequence , where since necessarily includes all degrees in according to the above discussion, we perform the following two enumerative steps to fully construct : starting from the largest possible, choose some degree from and include some copies of into . We also stipulate that no degree larger than from will be included into . Here is defined as follows. If includes all copies of from , then includes together with all copies of the degree from which is immediately smaller than . If does not include all copies of from , then includes together with all the remaining copies of from . The motivation for such a definition is that we don’t want to equal a degree we have just excluded from a previous consideration of when is being reduced starting from . include some small terms that are all less than from into , where is the subsequence of consisting of all copies of . We can generate a number of possible combinations of small terms with each combination summing to a suitable value based on the choice of and the choice in the enumerative step (1) and having a suitable number of terms so that satisfies the inequality in the above requirement 3. An appropriate procedure can be designed for this purpose such that combinations with more smaller terms are generated first and each combination can be generated in time. Note that both of these steps are enumerative steps. Step (1) must be exhaustive by trying each possible distinct from and each of the possible number of copies up to its multiplicity in . Step (2) can be non-exhaustive, which means we can impose a limit on the number of possible combinations of small terms to be included into . 
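To illustrate the flavor of the non-exhaustive enumerative step (2), here is a hedged Python sketch that yields combinations of small terms under a limit, preferring combinations with more (hence smaller) terms. It is a brute-force stand-in, not the paper's efficient per-combination procedure, and it may repeat value-identical combinations when the pool contains equal terms:

```python
from itertools import combinations

def limited_small_combos(pool, target, limit):
    """Yield at most `limit` combinations of terms from `pool` summing to
    `target`, generating combinations with more (hence smaller) terms first.
    Illustrative only; the names and the cutoff scheme are ours."""
    pool = sorted(pool)
    emitted = 0
    for r in range(len(pool), 0, -1):   # more terms first => smaller terms first
        for combo in combinations(pool, r):
            if sum(combo) == target:
                yield combo
                emitted += 1
                if emitted >= limit:
                    return
```

For instance, list(limited_small_combos([1, 1, 2, 3], 4, 2)) returns [(1, 1, 2), (1, 3)]: the three-term combination is produced before the two-term one.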
This parameter is the place where our algorithm is customizable and in reality we can choose to be a constant or a low degree polynomial of . This non-exhaustive enumeration step does open the possibility of our algorithm making an error on some “yes” input instances if the specified limit will cause our algorithm to skip some of the possible combinations. However, this step will not introduce any error on “no” input instances. We also note that some of the choices in these two steps can be pruned during the enumerative process to speed up the enumeration phase when they will cause to fail to satisfy the inequality in the above requirement 3. In fact, the lower bound on can be improved during the process as is being reduced so that the minimum largest degree in increases. The reader may have noticed that these enumerative steps are more sophisticated and complicated than the simple naive scheme of enumerating all possible subsequences of with sum . We will discuss several alternative enumeration schemes later in Section 5. The presented enumeration scheme here is the fastest we found through experiments. During the enumeration phase, the algorithm will stop and output “yes” if a successful candidate bipartition is found. Otherwise, it will stop enumeration and output “no” when the subsequence becomes shorter than , or, in the case that is empty, when includes less than half of the largest degrees from . We note that the enumeration phase can be easily parallelized with respect to the different choices of . However, it may not be worth it given the good run time performance of the serial version unless the input is long and hard (say ). See the following sections for run time complexity analysis and experimental evaluations. 3 Analysis of Run Time Complexity The seven rules in the first phase can all be checked in polynomial time. It can be easily verified that the total running time of these rules is . 
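To make the claim concrete, two of the first-phase rules can be sketched as simple arithmetic checks: Rule 2 (the Mantel bound on the degree sum) and Rule 6 (via the residue, computed by repeated Havel-Hakimi reduction as in Favaron et al.). The function names are ours and the inputs are assumed to be graphical sequences:

```python
def violates_mantel_bound(d):
    """Rule 2 sketch: a bipartite graph on n vertices has at most
    floor(n^2/4) edges by Mantel's theorem, so a potentially bipartite
    degree sequence must have degree sum at most 2*floor(n^2/4)."""
    n = len(d)
    return sum(d) > 2 * (n * n // 4)

def residue(d):
    """Residue of a graphical degree sequence: apply the Havel-Hakimi
    reduction until every term is zero; the number of remaining zeros is a
    lower bound on the independence number of every realization."""
    d = sorted(d, reverse=True)
    while d and d[0] > 0:
        k = d[0]
        d = sorted([x - 1 for x in d[1:k + 1]] + d[k + 1:], reverse=True)
    return len(d)

def violates_residue_rule(d):
    """Rule 6 sketch: the residue of the complementary sequence lower-bounds
    the clique number of every realization of d, and a clique of size 3 or
    more rules out bipartiteness."""
    n = len(d)
    return residue([n - 1 - x for x in d]) >= 3
```

For example, the degree sequence of K4, [3, 3, 3, 3], is flagged by both checks, while [2, 2, 2, 2] (realized by the 4-cycle) passes both.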
In the second phase, the three quantities , and can all be computed in time. The maximum index can be calculated in time. The number of choices for is . For each choice of , the number of choices for and its number of copies to be included in in the enumerative step (1) is . The maximum number of combinations of the remaining small terms to be included in in the enumerative step (2) can be chosen to be , , etc. Each combination can be generated in time. Whenever a full left side has been constructed, the Gale-Ryser conditional test on the candidate bipartition can be performed in time. Overall, we can see that the second phase runs in time when is . Note this run time is achieved at the expense of the algorithm possibly producing an erroneous output on some “yes” instances. However, the observed error rate is so low that we consider the limit on worthwhile. On the other hand, if no limit is placed on , then our algorithm will always produce a correct output, at the expense of possibly running in exponential time in the worst case. In summary, our algorithm can be customized to run in polynomial time with satisfactorily low error rates (see Section 4 for some evidence of error rates). Also note that it is a deterministic instead of a randomized algorithm. We mainly tested our implementation of the decision algorithm with the parameter customized as . We first show the low error rates of the algorithm and then show the good run time performance. 4.1 Error Rates We first demonstrate the somewhat surprising power of the seven rules in the first phase. In Table 1 we show the number of all zero-free graphical degree sequences of length that can be resolved by one of these rules and their proportion among all zero-free graphical degree sequences of length . Based on the description of the rules, these instances are all “no” instances.
The function values are obtained through a program that incorporates our decision algorithm into the algorithm to enumerate all degree sequences of a certain length from Ruskey et al. Ruskey1994 . Let be the number of zero-free potentially bipartite graphical degree sequences of length . Clearly since some of the “no” instances are resolved in the second phase. It looks safe to conclude from this table that tends to 1 as grows towards infinity and so tends to 0. Note that these are just empirical observations. Rigorous proofs of the asymptotic orders of these functions or their relative orders might require advanced techniques Wang2019 . In fact, those instances that can be resolved by one of the seven rules are not the only ones that can avoid the enumeration phase of our algorithm. For example, those instances that have can also be resolved immediately following the tests of the rules according to our description in Section 2. Next we demonstrate the low error rates of our algorithm. In Table 2 we show the number of all zero-free potentially bipartite graphical degree sequences of length that will be incorrectly reported as a “no” instance if we set and their proportion among all zero-free potentially bipartite graphical degree sequences of length . Even with the smallest possible , our algorithm makes very few errors on the “yes” instances. In fact, if we set , then our algorithm makes no error on all zero-free graphical degree sequences of length . However, the observed trend is that the limit needs to grow with for our algorithm to always make no error. We are unable to prove whether there is any polynomial of to bound such that our algorithm can always give correct outputs or the error rate is always below some constant. If grows faster than a polynomial of , then our algorithm could run in more than polynomial time in the worst case. In our experiments we did not find any “yes” instance of length that will be misclassified by our algorithm under the setting of .
We note that the error rates reported in Table 2 are with respect to the “yes” instances. The error rates will be much lower if they are computed with respect to all instances of length because, as we know from Table 1, by far the majority of the “no” instances have already been correctly detected by the seven rules. For example, at the setting of . Plus, increasing from to also further reduces the error rate. For example, at the setting of . 4.2 Run Time Performance We now demonstrate the run time performance of our algorithm with the setting of . Here the reported run times were obtained through a C++ implementation tested on typical Linux workstations. We have already shown in Section 3 that our algorithm runs in polynomial time if is bounded by a polynomial of . We generated random graphical degree sequences of specified length , largest term and smallest term . For a wide range of , we found that the hardest instances for our algorithm are approximately in the range of and . The instances in these ranges are the most likely to cause our algorithm to enter the enumeration phase. However, even the hardest instances we tested for can be finished in about a couple of minutes, which are necessarily those “no” instances that will go through the entire enumeration phase without any successful candidate bipartition being found.
On average, we found that the former has better run time performance. In the enumerative step (2) we prefer to enumerate the combinations of smallest terms first. Instead, we can choose to enumerate those of largest terms first. On average, we again found that the former has better run time performance. The enumerative steps (1) and (2) can even be combined into one step to make the enumeration phase simpler. That is, we can exhaustively enumerate all possible combinations of terms from with an appropriate sum, subject to requirement 3 about the number of terms in . (Or, more naively, we could exhaustively enumerate all possible combinations of terms from with the sum .) With these schemes we still face the choice of enumerating largest terms first or smallest terms first. On average, the choice of “smallest terms first” still enjoys better run time performance. However, in order to achieve similarly low error rates in these alternative schemes with the choice of “smallest terms first,” the limit on the number of combinations to be generated usually has to be much larger than the limit chosen in our design in Section 2, causing these alternatives to have much worse run time performance on those instances that require the second phase to decide. If no limit is placed on the number of combinations to be generated, these alternatives always produce correct outputs. Nevertheless, the run time performance can become terrible. For example, for some hard instances with length from 100 to 300, it could take days to detect a successful candidate bipartition for “yes” instances and tens of days to decide for “no” instances when unlimited is chosen, clear evidence of exponential run time behavior. For longer hard instances in the range , these more naive enumeration phases with unlimited might take years or longer to finish. As mentioned before, our algorithm always gives the correct conclusion for “no” instances.
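As background for the candidate bipartition tests discussed above, recall that whether a fixed bipartition of degrees is bigraphical can be checked with the classical Gale–Ryser condition Gale1957 ; Ryser1957 . The short sketch below is illustrative only (it is not the C++ implementation used in our experiments): it checks that both sides have equal degree sums and that every Gale–Ryser inequality holds.

```javascript
// Gale–Ryser test (Gale 1957; Ryser 1957): given two lists of non-negative
// integers a and b, decide whether some simple bipartite graph has exactly
// the degrees a on one side and b on the other.
function isBigraphical(a, b) {
  const as = [...a].sort((x, y) => y - x);          // a in non-increasing order
  const total = arr => arr.reduce((s, v) => s + v, 0);
  if (total(as) !== total(b)) return false;         // both sides count the same edges
  for (let k = 1; k <= as.length; k++) {
    const lhs = total(as.slice(0, k));              // k largest degrees on side a
    const rhs = b.reduce((s, d) => s + Math.min(d, k), 0);
    if (lhs > rhs) return false;                    // k-th Gale–Ryser inequality fails
  }
  return true;
}
```

For example, `isBigraphical([2, 2], [2, 2])` accepts (realized by a 4-cycle), while `isBigraphical([3, 1], [2, 2])` rejects, since a vertex of degree 3 needs three neighbors on the other side.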
But it could give an incorrect output for some “yes” instances, depending on the limit set in the enumeration phase. This kind of behavior can be contrasted with that of some randomized algorithms. The error our algorithm might make is fixed, and it comes from the fact that not all potentially bipartite graphical degree sequences exhibit the kind of pattern that can be captured by the particular “limited” search process of our algorithm. Simply put, our algorithm is deterministic: if it makes an error on an input under a particular setting of , it always makes an error on that input with that setting. If a randomized algorithm makes an error on an input, it could still produce a correct output the next time it runs. Now we comment on the complexity of the decision problem of potential bipartiteness of graphical degree sequences. It is obviously in NP. We don’t know whether it is in co-NP or in P, nor do we know whether it is NP-complete. Whenever our algorithm reports an input as a “yes” instance, it can also output a successful candidate bipartition. We are not sure whether this is necessary for this decision problem. For example, the well-known decision problem of primality of integers can be decided in polynomial time AKS2004 . However, a “composite” output does not come with a prime factor. It is known from the prime number theorem Hadamard1896 ; Poussin1896 that almost all integers are composite. In this sense, the polynomial solvability of the primality testing problem seems intuitive. We would also like to compare this problem with the decision problem of whether a given graph is of class 1 or class 2, i.e. whether its edge chromatic number is equal to the maximum degree or exceeds it by one. It is known from ERDOS1977 that almost all graphs on n vertices are of class 1 as n grows towards infinity. However, it is NP-complete to decide whether a graph is of class 1 or class 2 Holyer1981 . These facts sound less intuitive.
It is almost certain from our experimental results that the proportion of zero-free graphical degree sequences of length that are not potentially bipartite approaches 1 as grows towards infinity. Is it possible that the decision problem is actually in P? Or could it be that some hidden classes of hard instances are overlooked by our experiments and the decision problem is actually NP-complete, or NP-intermediate should P ≠ NP? In this paper we dealt with the decision problem of whether . In the case that is not potentially bipartite and it is desired to compute , we can decide, for each successive fixed , whether there is a -colorable realization of , until the answer becomes “yes.” We conjecture that each of these decision problems is NP-complete.

6 Summary and directions for future research

We presented a fast algorithm to test whether a graphical degree sequence is potentially bipartite. The algorithm works very well in practice. It remains open whether the decision problem can be solved in polynomial time. The complexity of the decision problem whether is also to be resolved.

This research has been supported by a research seed grant of Georgia Southern University. The computational experiments have been supported by the Talon cluster of Georgia Southern University.

- Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. PRIMES is in P. Annals of Mathematics, 160(2):781–793, 2004.
- Ch.-J. de la Vallée Poussin. Recherches analytiques sur la théorie des nombres premiers. Ann. Soc. scient. Bruxelles, 20:183–256, 1896.
- Zdeněk Dvořák and Bojan Mohar. Chromatic number and complete graph substructures for degree sequences. Combinatorica, 33(5):513–529, 2013.
- Paul Erdős and Robin J. Wilson. On the chromatic index of almost all graphs. Journal of Combinatorial Theory, Series B, 23(2):255–257, 1977.
- O. Favaron, M. Mahéo, and J.-F. Saclé. On the residue of a graph. Journal of Graph Theory, 15(1):39–64, 1991.
- D. Gale. A theorem on flows in networks. Pacific J. Math., 7(2):1073–1082, 1957.
- Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.
- J. Hadamard. Sur la distribution des zéros de la fonction zeta(s) et ses conséquences arithmétiques. Bull. Soc. math. France, 24:199–220, 1896.
- I. Holyer. The NP-completeness of edge-coloring. SIAM Journal on Computing, 10(4):718–720, 1981.
- Konstantinos Koiliaris and Chao Xu. Faster pseudopolynomial time algorithms for subset sum. ACM Trans. Algorithms, 15(3):40:1–40:20, 2019.
- W. Mantel. Problem 28 (solution by H. Gouwentak, W. Mantel, J. Teixeira de Mattes, F. Schuh and W. A. Wythoff). Wiskundige Opgaven, 10:60–61, 1907.
- Owen Murphy. Lower bounds on the stability number of graphs computed in terms of degrees. Discrete Mathematics, 90(2):207–211, 1991.
- Narong Punnim. Degree sequences and chromatic numbers of graphs. Graphs and Combinatorics, 18(3):597–603, 2002.
- S. B. Rao. A survey of the theory of potentially P-graphic and forcibly P-graphic degree sequences. In Siddani Bhaskara Rao, editor, Combinatorics and Graph Theory: Lecture Notes in Mathematics, vol 885, pages 417–440. Springer Berlin Heidelberg, 1981.
- Frank Ruskey, Robert Cohen, Peter Eades, and Aaron Scott. Alley CATs in search of good homes. In 25th S.E. Conference on Combinatorics, Graph Theory, and Computing, volume 102, pages 97–110. Congressus Numerantium, 1994.
- H. J. Ryser. Combinatorial properties of matrices of zeros and ones. Canadian Journal of Mathematics, 9:371–377, 1957.
- Kai Wang. Efficient counting of degree sequences. Discrete Mathematics, 342(3):888–897, 2019.
- Jian-Hua Yin. A short constructive proof of A.R. Rao’s characterization of potentially -graphic sequences. Discrete Applied Mathematics, 160(3):352–354, 2012.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00606.warc.gz
CC-MAIN-2021-39
27,287
81
http://community.linksys.com/t5/Others/WGA54G-v2-and-connecting-it-a-printer/td-p/502166?nobounce
code
03-19-2012 04:03 PM I have a WGA54G v2 and I need to use it as a wireless access point for a wired printer. I have an Apple Time Capsule with a built-in router. Anyone know how to get these two to talk to each other??? 03-20-2012 05:29 PM There are two ways to connect a Linksys router to another router: 1. LAN to LAN – Connecting one of the Ethernet ports (LAN ports) of the second Linksys router to one of the Ethernet ports (LAN ports) of the main router. This connection puts the computers connected to both routers on the same LAN IP segment, which allows sharing of resources within the network. 2. LAN to WAN – Connecting one of the Ethernet ports (LAN ports) of the main router to the Internet port (WAN port) of the second Linksys router. This connection makes it easier to identify which router the computers are connected to, since the two routers will have different LAN IP segments. Thus, computers that are connected to router A will not be able to communicate with router B, and vice versa, since they are two different networks. Here is the link for the configuration - http://www6.nohold.net/Cisco2/ukp.aspx?vw=1&docid=d4bfc9fde5284a4b845c9057ffbeb644_4735.xml&pid=80&r...
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863939.76/warc/CC-MAIN-20180620221657-20180621001657-00238.warc.gz
CC-MAIN-2018-26
1,188
9
http://use.perl.org/use.perl.org/_demerphq/firehose/index.html
code
Comment: Re:Straw men lack JFDI (Score 1) on 2009.03.22 8:14 I've been watching how some other projects manage this kind of stuff, and I have a few thoughts. First, you were doing maint releases, and backporting from dev into maint. I don't think that many projects have high-frequency maint releases; in most projects maint releases are instead "just enough for the release to not be a problem" and only made as absolutely necessary. If a project has high-frequency releases, it's at the bleeding edge, not in maint. Second, IMO backporting from dev to maint AT ALL only makes sense if dev and maint are relatively in sync. The further ahead dev gets, the more likely it is that you have to do major work to backport a patch. Now it seems to me that doing regular dev releases is MUCH less of a burden than doing regular maint releases. I would hypothesize as well that regular dev releases would make it easier for maint to stay synced. Smaller dev releases would mean more major versions, and more major versions would mean that the bleeding edge would tend to be closer in terms of code to its predecessors, which in turn should reduce the backport burden of important fixes. So I guess while I am not going to argue with you about the burden of our current processes, it's not clear to me that it isn't a self-fulfilling prophecy. There would be no reason to do heavy work in a maint track if the dev track was moving faster. Personally I'm inclined to think we could accelerate our dev releases a LOT. In fact I think one is long overdue already. Although I know perfectly well how busy Rafael is and that rolling a release right now is probably not something he really wants to do, I am somewhat of the opinion that we should make sure that we have all the major platforms passing tests for a few days and announce 5.11.1 as soon as we can. And then do our best to announce 5.11.2 soon after. Who cares if that means that we do a LOT of minor releases in the 5.11 line.
At some point or another we'll decide it's good enough to go out the door, and then we won't be talking about maintaining 5.10, we'll be talking about maintaining 5.12. Anyway, I guess my point is that making the caboose more efficient isn't going to make a train faster. Making the engine go faster will. Would I be correct in thinking that, leaving the quality of the latest blead aside, releasing 5.11.1 is a matter of creating a tag, rolling and uploading a CPAN bundle, and sending out an email?
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886739.5/warc/CC-MAIN-20180116204303-20180116224303-00326.warc.gz
CC-MAIN-2018-05
2,476
9
https://www.vingle.net/posts/1662782
code
So as you guys might know, today is the last day to join teams for the Monsta X competition. If you're interested in joining my team (Minhyuk) then please comment. I would love to have you, and I will need all the help I can get. When all my teammates are accounted for I will create a Vingle chat so we can all plan different cards so we can win this thing. If you have no idea what I'm talking about just go to Vatch's card >>HERE<< IT WILL EXPLAIN EVERYTHING!
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826842.56/warc/CC-MAIN-20181215083318-20181215105318-00448.warc.gz
CC-MAIN-2018-51
455
2
https://fawalltweakunes.gq/oracle-adf-enterprise-application-development.php
code
Oracle ADF Enterprise Application Development - Made Simple Book file PDF easily for everyone and every device. You can download and read online Oracle ADF Enterprise Application Development - Made Simple file PDF Book only if you are registered here. And also you can download or read online all Book PDF file that related with Oracle ADF Enterprise Application Development - Made Simple book. Happy reading Oracle ADF Enterprise Application Development - Made Simple Bookeveryone. Download file Free Book PDF Oracle ADF Enterprise Application Development - Made Simple at Complete PDF Library. This Book have some digital formats such us :paperbook, ebook, kindle, epub, fb2 and another formats. Here is The CompletePDF Book Library. It's free to register here to get Book file PDF Oracle ADF Enterprise Application Development - Made Simple Pocket Guide. In his spare time, Sten enjoys triathlon and completed his first Ironman in . ADF Enterprise Application Development - Made Simple (Book Review)
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00072.warc.gz
CC-MAIN-2020-29
3,978
40
https://blog.recursiveprocess.com/2011/11/29/ib-thoughts/
code
So our school is looking into implementing the International Baccalaureate program at our school in addition (?) to our AP programs. I know little about IB, and I’m interested in your knowledge and opinions on the IB program. Lemme have it. Good stuff? Things to watch out for?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00652.warc.gz
CC-MAIN-2023-50
279
1
https://jobs.cisco.com/jobs/ProjectDetail/Senior-Software-Engineer-Backend-Developer-Java-Go-Python-JavaScript-Unix/1293010
code
Location: Bangalore, Karnataka, India. Area of Interest: Engineer - Software. Technology Interest: Service Provider. What You'll Do? You will play a key role on a next generation Service Provider Network Automation Infrastructure team, as a Software Engineer, working in a fast-paced environment, developing Cisco's Network Automation Infrastructure software/solutions with high scalability, performance, openness/extensibility and quality. You will have an opportunity to work with various development teams across Cisco and drive the development of features from concept to reality. Additional responsibilities include defining APIs, working on code reviews, code merges, static analysis etc. Innovation, thinking creatively and challenging the status quo are helpful. You will collaborate with multiple Cisco Technology Groups and work cross-functionally with Product and Program Management, Engineering teams, Quality Assurance teams, etc. You should have proven hands-on experience developing software with Continuous Integration / Deployment. - Provide technical leadership in developing high-scale infrastructure that champions onbox/offbox applications. - Architect/design and develop an ecosystem around Streaming Telemetry. - Work in a startup-like environment and co-develop software with some of the largest SP customers in the world. - Develop automated tests for the software modules. - Conduct and participate in peer design/code reviews. - Deliver functional/design specifications. Who You'll Work With? You will work with the Service Provider Network Automation team. Streaming Telemetry enables unprecedented levels of visibility into the network and transforms network monitoring into a big-data problem. It opens up new capabilities like near real-time traffic engineering, fault detection/prediction and automated remediation.
Service Provider Network Automation team's mission is to harness the platforms and tools available today to transform network data into network intelligence and address key customer pain points by enabling network change automation and KPI monitoring. The rollout of 5G infrastructure by SPs at large scale would demonstrate this infrastructure. You will work with a team of best-in-class engineers who have between them tens of years of experience. You will also work with other groups such as product management, marketing, sales, customer support, and advanced services. These groups will engage with you to seek guidance on features and design, and for help with answering technical questions. You will also get feedback from them that will help you design good software in tune with customer expectations. Who You Are? You should meet these requirements: - 8+ years of work experience in software development. - Proficient in Java/Go. - Experience with RESTful APIs, HTML, XML, JSON encoding, GPB (Google protocol buffers). - Experience working with NoSQL databases. - Experience working with CI/CD tools like Git, Jenkins. - BS degree in CS/EE/CE or technical equivalent. Meeting these requirements gives you an advantage: - Prior experience with time series and graph databases (Prometheus, Neo4j). - Experience working with virtualization technologies like VMware/OpenStack. - Linux Containers/Docker. - Micro-services infrastructure. - Postgres, Kafka, RabbitMQ. - Familiarity with networking concepts. At Cisco, each person brings their different talents to work as a team and make a difference. Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people. We connect everything – people, process, data and things – and we use those connections to change our world for the better.
We innovate everywhere - From launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more – from Smart Cities to your everyday devices. We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00380.warc.gz
CC-MAIN-2020-24
4,376
37
https://wiki.folio.org/pages/diffpagesbyversion.action?pageId=96410045&selectedPageVersions=6&selectedPageVersions=5
code
- Development progress - Demonstration of the new managed/shared dashboard functionality - If time allows, demonstration of the Checklist functionality developed in the Open Access request management with an opportunity to discuss whether similar functionality would be useful for ERM - Sprint 149 - finality sprint before the Nolana release - working current features complete for the nolana release which include the dashboard functions - some work is going on the backend witch relates to dates we harvest from GOKb, a feature that enbable updates that come true effectivly at the moment sometimes. Dates, when updates in GOKb doesn't necessarily lead when updated in the FOLIO-knowledgebase. That work is happening in these sprint. Demonstration of the new managed/shared dashboard functionality In Nolana there are the following changes in the dashboard app: - Support for multiple dashboard per user - naming dashboards - ordering and navigating between dashboards - setting a default dashboard - deleting dashboards - Ability to grant other users access to your dashboards with different levels of access - View: dashboard is added to the user's list of dashboards and the user can view the dashboard, but not edit nor grant other users access - Edit: dashboard is added to the user's list of dashboards and the user can view and edit the dashboard, but not grant other users access - Manage: dashboard is added to the user's list of dashboards and the user can view, edit and grant other users access to the dashboard There is an extended dashboard actions menu for the new functionality and a list of users to manage the user access to the dashboards. If you have two dashboards yo can switch between them, for example one for agreements and one for licenses. Sara: Is the dashboard now or in the future going to be rolled out for all apps? Owen: In Nolana we don't have any additional apps that are going feed into the dashboards. 
There are plans to extend this to other apps and there is now a feature, Khalilah have been created which do exactly that. We have had some discussion with Khalilah about this and about we be necassary to enable this. If its possible it will be on the Orchid release Robert in the chat: These are great developments. It seems the pill menu for dashboard names would make long names and multiple names awkward. What happens when the length and number of names stretches beyond the width of the pane? Owen: At the moment I think it just gets longer and longer, a kind of list of these names. What we would like to do is implement a drop-down menu for the other names and when people really manage a lot of dashboards we have to think about a searchable or filterable list of names Von Sara Colglazier (MHC/5C) an alle 02:08 PM
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511023.76/warc/CC-MAIN-20231002232712-20231003022712-00127.warc.gz
CC-MAIN-2023-40
2,765
24
https://www.adplugg.com/support/question-answer/getting-inconsistent-results-and-registry-id-x-not-found-errors-on-vue-spa
code
Hi! I was wondering if I could get some help getting AdPlugg working on my Vue SPA application. I followed the solution described in another article here for SPAs, but the ads load inconsistently (often not at all) and throw the error above (x is a single-digit integer that varies) when they do not load.

UPDATE: Looping back to update this. This has been fixed since November 1st, 2022. You can now load the ad.js script multiple times on the page without issue (though it's best to only load it once if possible). Sorry for the trouble.

This is probably being caused by loading AdPlugg's ad.js script multiple times on the page. You might have installed the SDK/snippet version of the script and then added some Advertiser Tags (which also include the main ad.js script), or you might have put multiple Advertiser Tags on the page, or you might have put the SDK/snippet on the page twice. Unfortunately, AdPlugg isn't able to handle this situation: the script initializes, registers your Ad Tag, and then by the time the tag is being filled, your second instance of the script starts loading and wipes out the registry from the first load of the script. This is an outstanding bug; the script should be smart enough to handle it. We have it on the list to be fixed, but in the meantime, just ensure that you aren't loading AdPlugg multiple times on the page.
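One generic way to guarantee a single load in an SPA is to memoize the script injection behind a shared promise, so that every component reuses the first load. The sketch below is hypothetical and not part of AdPlugg's API: `inject` stands in for whatever code actually appends the ad.js script tag in your app.

```javascript
// Wrap a script-injection function so it runs at most once, no matter how many
// components mount. `inject` is hypothetical: in a real app it would append the
// ad.js <script> tag and return a promise that resolves once the script loads.
function makeOneTimeLoader(inject) {
  let pending = null;                  // shared across all callers
  return function loadOnce() {
    if (!pending) pending = inject();  // only the first caller triggers the load
    return pending;                    // later callers reuse the same promise
  };
}

// Demonstration with a stubbed injector (a real one would touch the DOM):
let injections = 0;
const loadAdJs = makeOneTimeLoader(() => {
  injections += 1;
  return Promise.resolve("ad.js loaded");
});
loadAdJs();
loadAdJs(); // injections is still 1 here: the script was only injected once
```

In a Vue app you would call the memoized loader from each component's mount hook; since all calls share one promise, the script tag is only ever added once per page.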
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100489.16/warc/CC-MAIN-20231203062445-20231203092445-00386.warc.gz
CC-MAIN-2023-50
1,358
5
https://www.advaiya.com/sitecore-helix-and-habitat-creating-a-new-module-project/
code
Helix is a set of overall design principles and conventions for Sitecore development. Helix gives developers architectural guidelines that have emerged out of experience from past Sitecore implementations. The core focus of Helix is the separation of functionality into modules – a conceptual grouping of all assets related to a single business requirement – and their organization into layers. This organization is beneficial because it reduces dependencies between modules, minimizing side effects and additional effort in later iterations. Helix defines some rules on how to organize the filesystem and the items in the content tree. When we follow these rules, we can be sure that other developers can easily navigate the solution we built. Habitat is a way to implement a Sitecore solution based on Helix design principles. Habitat is a real Sitecore project, built using the overall design principles and conventions provided by Helix. The Habitat solution for Sitecore v9 is available on GitHub, and a Habitat demo site is also available online for reference. The Sitecore Habitat solution can be set up by following the steps explained in many articles and videos available online. This article describes the steps to add a new module/project in Sitecore while following Helix principles and using the Habitat solution.

Creating a new module/project

Here we take the example of creating a new project named Ports in the Project section of the Sitecore Habitat solution. Any module in Feature and Foundation can be created in the same way:

1. Create a folder structure:
a. Open Windows file explorer and navigate to the path of the existing Sitecore solution
b. Inside the above path, create the following folder structure

2. Create a new project in Visual Studio: Open Visual Studio 2017 in administrative mode
a. Open the Sitecore Habitat solution from the location
b. Create a solution folder named “Ports” inside the “Project” section
c.
Create a project in the newly created “Ports” folder:
i. Right-click “Ports”, Add -> New Project
ii. Use the template “ASP.NET Web Application”
iii. Select the project location inside the file system as below: C:VSTSAdvaiyaSitecoreCodeAdvaiyasrcProjectPorts
iv. Name the project “code” to maintain the correct folder structure. We will rename it in the next steps.
v. Click Next, select the “Empty” project template and check the “MVC” checkbox
vi. Click “OK”

3. Setting properties of the new project:
a. Rename the project to Sitecore.Ports.Website
b. Go to the project properties and set the assembly name and namespace to Sitecore.Ports.Website
c. Update the project framework to “.NET Framework 4.6.2”
d. Delete the App_Data and App_Start folders and Global.asax from the project
e. Go to the properties of the “web.config” and “/views/web.config” files and set the “Build Action” property to “None”. With this property set, these files will not be published with the project.

4. Add configuration files:
a. Create the following folder structure in Visual Studio under Sitecore.Ports.Website: /App_Config/Include/Project and /App_Config/Environment/Project

5. Create a publish profile:
a) Copy the “/Properties/PublishProfiles” folder from another project.
b) Paste it at the same place in the new Sitecore.Ports.Website project.

6. Copy required assemblies from another project:
a) Unload the project and edit the project file.
b) Unload any other project and edit its project file.
c) Compare both project files and add the assemblies from the other project to the new one.
d) Comment the target location at the end of the project file, comparing with the other project.
e) Reload the projects.
f) Re-build the new project; it will now add all the required assembly references.

7. Create required folders in the CMS
a) Go to the Sitecore CMS, enter login details and open the Content Editor
b) Create the required project folder inside:
vi. Placeholder settings

8.
Check configurations and sync:
a) Open the Unicorn.aspx page
b) Check whether all configurations are done for the newly created project/module
c) If not, check for errors and resolve them one by one
d) Re-serialize the newly created project/module
e) Sync the project/module

The newly created project or module is ready. If you need to create a new project for a website in a multisite solution, then the IIS binding and hosts file entry for the new website have to be done. If you need any assistance with Sitecore implementation or with the creation of a project/module, our team of experts can help you.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00000.warc.gz
CC-MAIN-2023-50
4,570
52
https://link.springer.com/article/10.1007%2Fs10849-011-9146-9
code
Partly Free Semantics for Some Anderson-Like Ontological Proofs
Anderson-like ontological proofs, studied in this paper, employ contingent identity, free principles of quantification of the 1st order variables and classical principles of quantification of the 2nd order variables. All these theories are strongly complete wrt. classes of modal structures containing families of world-varying objectual domains of the 1st order and constant conceptual domains of the 2nd order. In such structures, terms of the 1st order receive only rigid extensions, which are elements of the union of all 1st order domains. Terms of the 2nd order receive extensions and intensions. Given a family of preselected world-varying objectual domains of the 2nd order, non-rigid extensions of the 2nd order terms belong always to a preselected domain connected with a given world. Rigid intensions of the 2nd order terms are chosen from among members of a conceptual domain of the 2nd order, which is the set of all functions from the set of worlds to the union of all 2nd order preselected domains such that values of these functions at a given world belong to a preselected domain connected with this world.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423512.93/warc/CC-MAIN-20170720222017-20170721002017-00106.warc.gz
CC-MAIN-2017-30
1,203
3
https://www.mobileread.com/forums/showpost.php?p=1049172&postcount=27
code
Originally Posted by catharsis Thanks bran. After running losetup -a I discovered that loop5 was the first free loop and I could mount and extract the file. Glad to hear you understood my hint about looking for a free loopback device. I'm also glad to hear you got it to work! Have Fun with the dr800+
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120694.49/warc/CC-MAIN-20170423031200-00535-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
298
4
https://sqlpass.eventpoint.com/topic/details/BID382S
code
Implementing Reporting Services in SharePoint Integrated Mode Reporting Services is a powerful tool for building and deploying reports. But what happens when you need more than just reports? If you need to build a content sharing portal along with your reporting solution, SharePoint integration might be for you. Come learn how to successfully implement Reporting Services in SharePoint integrated mode and some of the pitfalls that you might run into along the way. Senior Program Manager - SQLCAT Chuck Heinzelman is a Senior Program Manager with the SQLCAT team at Microsoft. He holds a BS degree in Management Computer Systems from the University of Wisconsin - Whitewater and is working on his MBA from the University of Wisconsin - Milwaukee. He has been working with SQL Server since 1998 and also has experience developing Windows and Web applications using Visual Basic and C#. Chuck has been involved with PASS since 2000, serving in many capacities, including author, editor and past member of the Board of Directors. He is also a charter member of the Wisconsin SQL Server User Group. Register Now for PASS Summit 2010!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00011-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,138
5
http://thegoan.com/
code
I’m happy to announce the release of FireBible 2.0; this version will work on Firefox versions 26 and up, on Windows and Linux. FireBible is now based on native libraries and no longer requires Java; it’s much easier to install and far more robust. If you already had FireBible installed from this site, check for an update to get the new version or just hit the FireBible page to install it. I’ll be updating the AMO version shortly, though it might take some time for this version to be accepted. - Alternate versification (av11n) support. You can now read Catholic and other Bibles with versification that differs from KJV canon. Try the CPDV or DRC Catholic bibles. - FireBible will download the native libraries it needs on Windows or Linux, it’s marginally faster than previous versions and much more robust (no more issues due to Mozilla’s security policies or odd Java distributions and browser plugins). Deficiencies wrt. FireBible 1 - Does not work on OS X, the native libraries that FireBible uses haven’t been compiled for OS X yet. - No search functionality. - No module manager, though you can install from ZIP files easily through a new Install Modules button on the FireBible toolbar. I’m working on the search and module manager, I did not want to hold up the release further because of these missing features (hence the beta tag). If you’re a C++ developer with OS X experience and would like to help compile the native libraries FireBible needs for OS X, please get in touch. Firefox kept changing their security policies as far as the use of Java in extensions was concerned and I had to keep duct-taping fixes in place that would keep FireBible working. Finally, they made a change that broke FireBible outright and there was no way to keep it going with Java; believe me, I tried. I wrote about these issues in the past and had been thinking about switching from JSword (Java library) to Sword (native library) for some time. 
When David Haslam pointed me to XULSword, I knew it was technically capable and decided to make the move. FireBible 2.0 has been in development for ~ 1.5 years, the fundamental change from JSword to native Sword libraries wasn’t that hard (thanks to XULSword) but most of FireBible had to be rewritten. Getting all the functionality working has taken much longer than I anticipated – FireBible is weekend work and I got married last year! It’s still far from perfect, but definitely quite usable in its current form. Any feedback would be appreciated. Managed to catch a couple of Kites feeding as I walked past my balcony. So this video has been taken through my window, from a distance of just over three feet; view at high quality if you can. Chuffed to bits that I get to experience so much “Kite Stuff” first hand – I’m sure these two are feasting on “Shivaji Market beef”! Most of my friends know I stay next to St. Vincent’s High School in Camp, Pune. This is where I studied from class VI to XII after we moved here from Bombay, and where I’ve lived for just over two decades. Our brass band has cyclic quality which changes over the years; it improves as the students get better but quality can drop when there is a sudden influx of new guys. The last three to four years have been particularly good though. Sometimes the band beats your alarm clock to the punch, which is mostly pleasant, but I wish they would learn more tunes! I released FireBible 1.4 early this morning, this release fixes issues with recent Java 7 updates and Firefox 16 and above, as discussed in my last post. I also threw in a few minor bug fixes. A longer term update: - FireBible still won’t work with the IcedTea Java plugin on Linux (unless you remove almost all security settings, not recommended). So you will still need an Oracle Java installation on Linux for FireBible to work. - Still no av11n (alternate versification) support as I need to wait for this to be added to the JSword library. 
Full support for deuterocanonical books (including the Catholic Bible :() therefore not present yet. Hopefully soon. - Ubiquity support still cool (I consider this to be one of FireBible’s best features) – but you need the privately maintained Ubiquity 0.6 version or higher; you can get that here (download the file and select it in the File > Open dialog to install). - Will investigate the viability of switching to the native Sword libraries as opposed to Java based JSword: - No Java installation required, extension should work out of the box. - Security issues keep cropping up with “Java in the browser”. There are restrictions that are either imposed by Firefox itself or changes in Java internals that affect “not so common” use of Java. Hopefully we’ve seen the last of these. - av11n support already present in native Sword libraries. - Will lose JSword specific UI such as the module installer and settings. - Implementation will be harder and take time which I do not currently have. - Will need to build separate versions of the extension for each platform (currently not required due to the use of Java libraries). Unfortunately there have been a couple of issues rendering FireBible defunct on recent versions of Firefox. - First, there was a security vulnerability in Java 7 update 6 and below. 
This led to these versions of Java being disabled; if you updated your Java installation, it would be enabled again, but loading text in FireBible fails with:
org.xml.sax.SAXException: Could not compile stylesheet
javax.xml.transform.TransformerConfigurationException: Could not compile stylesheet
at org.crosswire.common.xml.TransformingSAXEventProvider.provideSAXEvents(TransformingSAXEventProvider.java:174)
at org.crosswire.common.xml.XMLUtil.writeToString(XMLUtil.java:85)
at …
I'm not too sure if this was caused by changes in Java 7 update 7 (or later) or if it was the switch to JAXP 1.4.6 in Java 7 update 4 – nevertheless, you are now forced to use a version of Java in which the above error occurs. - The second issue is a change in Firefox version 16 (16.0.1 is now current) where the "java" global variable was removed. I was warned by the addons team in advance, but failed to make changes in time (however, we'd still be stymied by the above problem). The good news is that I do know how to solve both problems, but am currently looking for a better solution for the first of these. I'll try to release a fix ASAP. This extension has always been a technical challenge, which is fun most of the time, but the battle with Java in the browser has been a PITA – it still won't work with IcedTea due to more permission problems. I'm wondering if I should go native instead and switch from JSword to Sword; the crossover will be quite expensive in terms of time and effort, but will probably add long-term stability to FireBible. A Java installation (specifically, the Oracle one) would no longer be required and we wouldn't be plagued by these incessant security issues.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121305.61/warc/CC-MAIN-20170423031201-00412-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
7,001
34
https://deskshare.com/help/AFM/WhatIsAutomatedTransfer.aspx
code
Automated Transfer Rules tell Auto FTP Manager what operations to perform while executing an Automated Profile. While creating a profile, define all the rules and let Auto FTP Manager execute the profile according to these rules. In the Automated Transfer Rules, you can:
- Select the actions to perform, such as Transfer, Transfer and Move, Transfer and Delete, Synchronize or Delete. Delete options help you choose which files to delete.
- Rename the files after transfer on the destination using rename rules.
- Decide the conflict resolution method that needs to be applied when a file with the same name is already present at the destination location.
- Include only important files by configuring filters. Auto FTP Manager will choose only these files for the automated process.
- Run the same profile at a particular time by configuring the schedule.
- Select actions to perform before or after completion of a transfer, or turn off the computer once the automated transfer is completed.
- Save and send the transfer report once the automated process is completed.
To add different Automated Transfer Rules to your automated transfer profile, right-click the profile and select Edit Connection Profile >> Transfer Rules. You can specify any or all of the following Action Rules: In an Automated Transfer Profile, Action Rules tell Auto FTP Manager what operation to perform and to which location. If the option Transfer files from <Folder 1> To <Folder 2> is checked, then all files present in Folder 1 are transferred to Folder 2. You can even choose to Delete files from <Folder 1> After Transfer or move the files to a different location of your choice. You can choose to delete specific files after transfer. The rule Transfer files from <Folder 2> To <Folder 1> transfers files present in Folder 2 to Folder 1. You can choose to Move the files that are transferred to another folder by clicking the Location button.
If you do not require the transferred files anymore, check the After Transfer option and select a delete option from the dropdown. When you run an automated profile with the Synchronize <Folder 1> and <Folder 2> option, it makes sure that the files present in both folders are exactly the same. If the files are not present in one folder, then they are transferred from the other folder. This option is automatically selected when you check both of the previous action rules. You can have Auto FTP Manager delete files from the target folder if they are not present in the source folder. For instance, if Folder 2 contains a file named "Extrafile.txt" which is not present in Folder 1, then selecting the option Delete file from <Folder 2> if not present in <Folder 1> deletes the file Extrafile.txt when you run the Automatic Transfer Profile. The rename rules change the file name on the destination <Folder 2> after transfer as per the configured rules. All the selected rules will be applied to the filename sequentially. Note: <Folder 1> and <Folder 2> will be replaced by your computer's name and the IP address of the server. You must note that the actual names of <Folder 1> and <Folder 2> will be used by the program. Suppose you want to transfer a file from your PC to the server. If the file is already present on the server, there will be a conflict. Auto FTP Manager has a way to resolve this conflict easily with the help of Conflict Resolution rules. Auto FTP Manager provides the following conflict resolution options for files:
- Append Date and Time: The transferred file is renamed with the current date and time appended to its name, thus making the file transfer succeed.
- Overwrite always: The file in the destination folder will be replaced by the one in the source folder.
- Overwrite if date is newer: If the file being transferred was created after the existing file, then the old file is replaced with the new one.
- Overwrite if size is larger: If the size of the transferred file is greater than the existing file, then the smaller file is replaced by the larger one.
- Overwrite if date/size is different: The transferred file replaces the existing one if their size and modification dates do not match.
- Skip: The file will not be transferred, which means no action will be taken.
Note: The same Conflict Resolutions will also be applied to the location you choose to move files to after transfer. File and Folder Filters: To prevent some files and folders from transfer, you can exclude them or add filters on Modification Date, File Size, File Name and File Type. With Auto FTP Manager, you can schedule sets of transfers to run anytime you wish. Schedule each profile to start transfer automatically, either on a Daily, Weekly or Monthly schedule, for a one-time event, automatically on program launch or on Folder Change. You may want a specific action to be executed before or after a file transfer. You can configure actions such as running a particular program, closing the application or turning off the computer. You can view the execution status of all the configured actions. Configured actions and their time of execution will be shown. Once the Automated Profile is executed, you can set Auto FTP Manager to perform the following operations: - Save transfer report: You can choose when the transfer report should be generated. You can save a complete report in the form of an HTML document listing all the important events of the transfer. The report can be saved to your PC, a network drive, or an FTP server. In addition, there is also another option which allows you to send the report as an email. When you click Configure, the following dialog will open: You can choose to save every report or just the one that was created when a file transfer failure occurred.
Save location: Decide the location for saving the transfer reports. You can choose any one or all of the following locations. PC or network drive: By default, the transfer reports will be saved on your PC in Documents >> Auto FTP Manager >> Transfer Reports. You have the option to change the default folder by clicking on Browse. FTP: A remote location is always the safest option for saving important data. Your transfer report can be saved on an FTP server. You just have to specify your FTP connection details and your transfer report is protected. FTP connection details include the server name or the FTP address, username, password, port number, and destination folder on the server. Email: If you chose to send the report as e-mail, enter the mail details. Mailing information requires you to specify the email addresses of the sender and the receiver. You can include multiple receivers separated by a comma. When saving transfer reports: If you transfer files frequently and are interested only in the latest transfer report, then choose the option Overwrite Transfer reports. Every time you perform automated transfer actions, the previous report is replaced with the new one. You can keep track of all the reports by not deleting the previous ones. Enable the option Maintain history of Transfer reports by appending current date and time.
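The "Overwrite if date is newer" rule above can be sketched as a small one-way sync. This is not Auto FTP Manager's implementation — just a toy local-folder illustration of the rule, with a function name of my own choosing:

```python
import os
import shutil

def sync_newer(src: str, dst: str) -> list[str]:
    """One-way sync: copy a file from src to dst when it is missing there,
    or when the src copy has a newer modification time ("Overwrite if date
    is newer"). Returns the names of the files that were copied."""
    copied = []
    os.makedirs(dst, exist_ok=True)
    for name in sorted(os.listdir(src)):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if not os.path.isfile(s):
            continue  # toy sketch: flat folders only, no recursion
        if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
            shutil.copy2(s, d)  # copy2 preserves the modification time
            copied.append(name)
    return copied
```

Because copy2 preserves timestamps, running the sync twice in a row copies nothing the second time — the same idempotence the scheduled profiles rely on.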
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817765.59/warc/CC-MAIN-20240421101951-20240421131951-00198.warc.gz
CC-MAIN-2024-18
7,024
122
https://www.newma.co.uk/android-app-developers-bluetoothadapter-cgpt
code
Android BluetoothAdapter is a class in the Android SDK that provides a way to interact with Bluetooth hardware on an Android device. With the BluetoothAdapter class, you can perform operations such as discovering and pairing with Bluetooth devices, creating a Bluetooth socket for data transfer, and managing the local Bluetooth adapter. Some of the common methods provided by the BluetoothAdapter class are: enable() and disable(): These methods are used to enable or disable Bluetooth on the device. startDiscovery(): This method starts a scan for nearby Bluetooth devices. cancelDiscovery(): This method cancels an ongoing scan for Bluetooth devices. getBondedDevices(): This method returns a set of Bluetooth devices that are paired with the local device. createRfcommSocketToServiceRecord(): This method creates a Bluetooth socket for data transfer. To use the BluetoothAdapter class, you first need to get an instance of it by calling the static getDefaultAdapter() method. This method returns a reference to the BluetoothAdapter object that you can use to perform Bluetooth operations. It's worth noting that Bluetooth operations in Android require certain permissions to be added to your app's manifest file, and you'll also need to check if Bluetooth is enabled on the device before performing any Bluetooth-related operations. The Android BluetoothAdapter is a class provided by the Android operating system that allows an Android device to interact with Bluetooth devices. It provides methods to enable/disable Bluetooth, scan for nearby Bluetooth devices, establish connections with those devices, and transfer data between them. Some of the most commonly used methods of the BluetoothAdapter class are: isEnabled(): Returns true if Bluetooth is currently enabled on the device, false otherwise. enable(): Enables Bluetooth on the device. disable(): Disables Bluetooth on the device. startDiscovery(): Begins scanning for nearby Bluetooth devices. 
cancelDiscovery(): Stops scanning for nearby Bluetooth devices. getBondedDevices(): Returns a set of BluetoothDevice objects representing devices that are already paired with the Android device. createRfcommSocketToServiceRecord(UUID): Creates a BluetoothSocket that can be used to establish a connection with a remote Bluetooth device. To use the BluetoothAdapter in your Android application, you need to first obtain an instance of the BluetoothAdapter by calling BluetoothAdapter.getDefaultAdapter(). You can then use this instance to call any of the methods provided by the class. It's important to note that in order to use Bluetooth functionality in your Android app, you'll need to include the BLUETOOTH and BLUETOOTH_ADMIN permissions in your app's manifest file. Bluetooth is a wireless communication technology that enables data transmission over short distances. It was first introduced in 1994 by Ericsson, a Swedish company. Since then, it has become a ubiquitous technology that is used in various devices such as smartphones, laptops, wireless headphones, and smartwatches. In this article, we will explore how Bluetooth works. Bluetooth technology operates using radio waves that are in the 2.4 GHz frequency range. These radio waves have a short range of up to 30 feet (10 meters) and can transmit data at a rate of up to 24 Mbps. Bluetooth technology is based on the principle of frequency hopping spread spectrum (FHSS). In FHSS, the radio waves are rapidly switched between different frequencies, which reduces interference from other wireless devices. The Bluetooth protocol uses a master-slave architecture. In this architecture, one device is designated as the master, while the other device is designated as the slave. The master device initiates the connection, while the slave device responds to the master's request. The master-slave architecture enables multiple devices to connect to each other simultaneously. 
When two Bluetooth devices want to connect with each other, they first need to go through a process called pairing. During the pairing process, the two devices exchange information such as their names, security codes, and supported features. Once the devices are paired, they can establish a connection and start transmitting data. Bluetooth devices use a technique called frequency hopping to avoid interference from other wireless devices. Frequency hopping means that the Bluetooth devices switch between different frequencies at a high speed, usually 1600 times per second. This technique helps to avoid interference from other wireless devices that operate in the same frequency range. Bluetooth uses packets to transmit data between devices. A packet is a unit of data that is transmitted between the devices. Each packet contains a header, a payload, and a checksum. The header contains information such as the packet type, the device address, and the payload length. The payload contains the actual data that is being transmitted, such as an audio file or a text message. The checksum is used to ensure that the data has not been corrupted during transmission. Bluetooth devices use a technique called Adaptive Frequency Hopping (AFH) to further improve the reliability of data transmission. AFH uses algorithms to detect and avoid sources of interference, such as other wireless devices operating in the same frequency range. One of the key features of Bluetooth technology is its ability to support different profiles. A profile is a set of rules that define how a specific type of device should behave when it is connected to another device. For example, the Advanced Audio Distribution Profile (A2DP) is used to stream high-quality audio from a smartphone to a pair of wireless headphones. The Hands-Free Profile (HFP) is used to connect a smartphone to a car's Bluetooth system, allowing the driver to make and receive calls hands-free. 
In conclusion, Bluetooth technology is a wireless communication technology that enables data transmission over short distances. It operates using radio waves that are in the 2.4 GHz frequency range and uses a technique called frequency hopping to avoid interference from other wireless devices. Bluetooth devices use packets to transmit data between devices, and they can support different profiles that define how they should behave when connected to other devices. Bluetooth technology has become an essential feature in many devices, and it continues to evolve to meet the changing needs of users.
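The frequency-hopping idea described above — paired devices switching channels in lockstep about 1600 times per second — can be illustrated with a toy sketch. Real Bluetooth derives the hop sequence from the master's address and clock; here a shared seed stands in for that, so this is an illustration of the principle, not the actual algorithm:

```python
import random

NUM_CHANNELS = 79  # classic Bluetooth hops across 79 channels in the 2.4 GHz band

def hop_sequence(shared_seed: int, hops: int) -> list[int]:
    """Toy FHSS: both paired devices seed the same PRNG (a stand-in for the
    master's address and clock), so they visit the same pseudo-random
    channels in lockstep and stay synchronized."""
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(hops)]

# One second of hopping at ~1600 hops per second, on both ends of the link.
master = hop_sequence(0xC0FFEE, 1600)
slave = hop_sequence(0xC0FFEE, 1600)
```

Because both ends compute the identical sequence, they always listen on the same channel at the same instant, while the rapid channel changes spread transmissions across the band and reduce interference.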
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817187.10/warc/CC-MAIN-20240418030928-20240418060928-00414.warc.gz
CC-MAIN-2024-18
6,450
30
http://stackoverflow.com/questions/9028753/jquery-enclose-image-in-a-div-with-a-dynamically-set-width
code
EDIT 1/27/12 11:32am - I believe I have fixed all issues associated with this script. I need to apply a CSS class to each image I want to resize depending on if I want to float it left, right or have it centered, but I can live with that. See my latest explanation and demo at http://evanwebdesign.com/responsive-web-design/example5.html Edit 1/27/12 10:50am - There is still a problem with this script. See my comments below. Working on a fix. EDIT 1/27/12 9:00am: I now have this problem nearly complete. See my example at http://www.evanwebdesign.com/responsive-web-design/example3.html and compare with my original examples I currently have an example of a responsive image that adjusts depending on the width of the browser window posted at http://evanwebdesign.com/responsive-web-design/example.html. EDIT 1/16/12: I have made a more elaborate example that (hopefully) makes my question easier to understand at http://evanwebdesign.com/responsive-web-design/example2.html But suppose I have multiple images within my #content div? And suppose they're all different widths? I don't want to manually calculate width values and create enclosing divs with custom classes for all of my images! Instead I want to write a jQuery script that will do the following: Go through all images within the #content div Get the width of each image (I realize that I need to use $(window).load() for this otherwise I will wind up with an image value of 0) Take the image's width in pixels and divide it by 900. (900 is the arbitrary max-width pixel value of the #content div after the padding has been applied.) Take this new number created from the previous step apply it as a percentage to the width of a new containing div around the image. The goal is to dynamically create the .home_photo div for each image within the #content div. I don't care if the CSS is written inline, as the markup will never appear in the HTML. 
Please let me know if this question is clear, or if there is anything else I can do to better explain myself. I know there are a lot of steps involved with this question. Thanks in advance to everyone for your help!
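Steps 2-4 of the question boil down to a small calculation: image width divided by the 900px max-width, applied as a percentage to a wrapper div. A sketch in plain JavaScript — the function name is mine, not from the post, and the jQuery wiring in the comment assumes the #content selector described above:

```javascript
// Steps 3-4 as a pure function: pixel width -> percentage of the
// 900px max-width #content div, formatted for a CSS width value.
function wrapperWidthPercent(imgWidthPx, contentMaxPx) {
  return ((imgWidthPx / contentMaxPx) * 100).toFixed(4) + "%";
}

// jQuery wiring (sketch): run on window load so image widths are non-zero.
// $(window).load(function () {
//   $("#content img").each(function () {
//     $(this).wrap($("<div>").css("width", wrapperWidthPercent(this.width, 900)));
//   });
// });
```

For example, a 450px-wide image in a 900px container gets a wrapper width of "50.0000%", so the wrapper (and the 100%-wide image inside it) scales with the container.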
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00191-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
2,129
14
https://community.khronos.org/t/opengl-windows-vs-linux/36768
code
Not quite sure where this belongs so soz if I’ve put it in the wrong forum! I was just wondering if there are any good resources out there that compare the performance of OpenGL systems on Windows systems vs Linux ones? Or failing that any opinions you might have on OpenGL’s performance?! Assuming that there is a difference My untested opinion is that you’ll get results that are similar on Linux to those on Windows where you have a driver that’s supported by the manufacturer. Here’s an interesting article on driver architecture from NVIDIA for example and the shared codebase they use across platforms: Here’s an interesting review of some cards on Linux with benchmark results but it’s a year old now and was using 2.2 GHz CPUs: Compare with Windows results here: Make sure you account for the different CPU speeds and compare the Viewperf 7 results not viewperf 6.1.2. If you google for: site:www.specbench.org viewperf results Linux You’ll find a few results you can compare and they’re usually pretty close once you account for other differences like CPU. on linux, graphics cards that are faster than others on windows tend to get slower than those. At least, so does my GF 5900 XT. It is slower than the 5700 non-ultra I had before, you can find a thread about that in the linux forum. What this means is that linux often is a step behind windows in terms of drivers. Performance and functionality itself is the same, which seems logical because in OpenGL, the graphics chip does most of the work, and it will not even know which OS the computer it is working in is running on. One thing that might in fact affect performance in “real” programs (not demos but programs that also require some CPU work) is that as far as I know, the linux c compiler (gcc) does not produce as fast code as the microsoft c++ compiler does, so this might make a difference.
it might have been true at some point in the past that gcc was slower than cl, but my recent experience has been that it handily beats at least the msvc6 version. I took a set of custom cpu-limited benchmarks and ran gcc vs icc (intel on linux), and cl vs icl (intel on windows). The results were that icc produced instructions that performed on par with gcc, but icl handily beat cl by about 10-20% on average. I haven’t run numbers on the msvc7 version, so they might have improved a bit, but it’s probably still in the same ballpark. VC.NET optimization is significantly better than VC6; however, I don’t consider that a valid issue when comparing GFX performance. And of course gcc continues to improve.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00034.warc.gz
CC-MAIN-2022-49
2,604
17
https://pcminecraft-mods.com/15-seconds-puzzle-map-for-minecraft/
code
This map is a memory game in which you have 15 seconds to inspect the room. After this time expires, you will need to answer 5 questions about the room itself. So test your memory. How to install 15 Seconds Puzzle - Download map - Unzip it and copy to C:\Users\User_Name\AppData\Roaming\.minecraft\saves
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515041.68/warc/CC-MAIN-20181022113301-20181022134801-00455.warc.gz
CC-MAIN-2018-43
332
4
https://softwareengineering.stackexchange.com/questions/411745/when-to-extract-boolean-conditions-into-their-own-function
code
I commonly use a boolean condition/variable when it's something too big or complicated and takes too much space by itself inside ifs - basically to avoid repetition and thus improve readability. E.g. has_three_repeated_digits = len(some_number) == 4 and len(set(some_number[1:]) and set(some_number[:-1])) != 1 and ... etc. So it is used as if has_three_repeated_digits: ... However, I've noticed it is quite common to use boolean-returning functions instead: def has_three_repeated_digits(number): return len(number) == 4 # ... etc And it is used as if has_three_repeated_digits(some_number): ... So, when should I turn my boolean variable has_foo into a function has_foo(bar)? My initial suspicion is that I should do it whenever this variable will be used inside another method, or in a similar situation (as making a boolean variable global would be quite... ugly?). But I'm not sure.
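The exact condition in the question is elided ("... etc."), so the sketch below implements one plausible reading of the predicate — some digit occurring at least three times — purely to make the variable-vs-function trade-off concrete:

```python
from collections import Counter

def has_three_repeated_digits(number: str) -> bool:
    """One plausible reading of the elided condition: some digit
    occurs at least three times in the number."""
    return any(count >= 3 for count in Counter(number).values())

# As a function the condition is reusable from any call site and
# testable on its own, instead of being a one-off boolean variable
# recomputed (or passed around) wherever it is needed.
```

This is the usual argument for extracting the boolean into a function: the moment a second call site or a unit test wants the condition, a local variable no longer suffices.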
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100081.47/warc/CC-MAIN-20231129105306-20231129135306-00104.warc.gz
CC-MAIN-2023-50
857
10
https://www.onmsft.com/news/get-a-free-windows-11-2022-virtual-machine-until-january-2023/
code
If you want to get a look at Microsoft’s latest operating system, or if you’re a developer looking to dabble in building apps for Windows 11, you can download Microsoft’s official virtual machine for free. These virtual machines have been updated with the latest developer bits including Windows 11 2022, Visual Studio 2022, and other utilities. These virtual machines are free until January 10, 2023; after that, Microsoft will begin asking you for activation. If you activate it with a genuine key you can continue using the system. You’ll have two months to use the following resources for free.
- Windows 11 Enterprise (Evaluation)
- Visual Studio 2022 Community Edition with UWP, .NET Desktop, Azure, and Windows App SDK for C# workloads enabled
- Windows Subsystem for Linux 2 enabled with Ubuntu installed
- Windows Terminal installed
- Developer mode enabled
These Windows 11 2022 virtual machines come in four flavors: VMWare, Hyper-V, VirtualBox, and Parallels. You will need at least 20GB of storage available to get started.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506329.15/warc/CC-MAIN-20230922034112-20230922064112-00274.warc.gz
CC-MAIN-2023-40
1,049
8
https://stackoverflow.com/questions/14365056/issues-with-avd-manager-v21-0-1-and-21-1rc2-xxhdpi-not-supported/15244235
code
For the first issue, about the error in parsing after adding xxhdpi devices, try updating the Android SDK Tools to 21.1; this really should solve the problem. For your second issue, making the emulator work with xxhdpi resources, this can be accomplished with a few additional adb commands after the emulator has started. Taken from an installation troubleshooting page at developer.sonymobile.com (http://developer.sonymobile.com/knowledge-base/sdks/sony-add-on-sdk/install-the-sony-add-on-sdk/), the following statement is available: In the Android SDK Tools version 21.1, the xxhdpi screen resolution for the emulator display is not yet supported. However, you could do a manual override of the LCD properties of the emulator. This will ensure that the correct resources are being dispatched for the Xperia™ Z emulator. To do so, please execute the following in your command line after the Xperia™ Z emulator has completed the boot sequence:
adb shell setprop qemu.sf.lcd_density 480
adb shell stop
adb shell start
The emulator will then restart. After it has rebooted, the emulator should use the correct screen density and UI scaling. We recommend executing these commands using a batch file or a shell script if you use these settings often.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00680.warc.gz
CC-MAIN-2018-34
1,261
7
https://www.onejackdaw.com/exipure-west-covina-california/
code
Exipure West Covina California Exipure is a natural nutritional mix that supports healthy fat burning by converting white fat into brown fat. This supplement utilizes an one-of-a-kind method to get rid of added fat in the body that is otherwise layered as well as makes a person extremely overweight Exipure West Covina California. The conversion of white to brown fat is enabled making use of all-natural components with tried and tested medicinal benefits. Absolutely nothing inside Exipure is obtained from untrusted or artificial resources; as a result, it brings no health risks. This supplement is currently up for sale at an exclusive reduced rate online. Diet tablets are preferred for many factors, however individuals choose making use of diet plan tablets since they desire an effortless weight reduction experience. The common concept of weight reduction involves following a limiting diet plan as well as delighting in strenuous workout. While these two can aid most of the times, there are additionally opportunities for them to be inadequate, as weight gain factors differ in every person. In some cases, even one of the most famous diet regimen plans fall short to function, or it is expensive for individuals to hire an individual instructor or obtain time for workout everyday. On the other hand, making use of diet regimen tablets as well as expecting the body to slim down by itself audios easy, plus diet pills cost less than a weight loss surgical treatment, so people favor to utilize them. Exipure is among the latest enhancements in the diet plan tablets that are chosen nowadays. Regardless of being a new product, it is receiving a cozy welcome, mostly due to the fact that it has actually assisted individuals accomplish their target weight without triggering a financial concern. 
Yet the threat of trying a new product stays the same, especially for a person who has actually never attempted a dietary supplement before.Exipure West Covina California How to be sure if it is secure to use? What are its components as well as where to buy Exipure? Figure out done in this Exipure evaluation. What Is Exipure? Exipure is a weight-loss supplement made from natural ingredients with clinically proven benefits. It results from years-long study on medicinal plants, hoping to find the very best choices for all-natural fat burning. Exipure West Covina California As plants have actually been used for countless years in numerous treatments, researchers think some of them can also aid against obesity. In this attempt to find these plants, they thought of 8 exotic active ingredients, each contributing in dropping undesirable fats. As pointed out on exipure.com, this supplement functions similarly well on men, women, and individuals who identify themselves aside from this binary classification. It is a non-prescription formula, but just those who are 18 years as well as over can utilize them. Exipure comes in capsule form, as well as there are 30 of them in each bottle. This one container is to be consumed in one month, ideally, and the best results are observed within two to three months. Although the supplement sector has plenty of similar items, weight management with Exipure is unique. It functions by transforming the common white fat to brown fat, also called brownish adipose fat. The natural components inside this supplement aid in this conversion, and the body sheds a lot of calories throughout this conversion. One of the most important and also unique top quality of Exipure is that it goes inside the body and targets the major source of weight gain. It functions to enhance metabolism as well as control tension and also inflammation inside the body. Furthermore, Exipure is a US-made item prepared in an FDA-approved and also GMP-certified facility. 
The final product is checked with a third-party research laboratory for quality as well as security. There are least opportunities of it going wrong and bringing up an undesirable impact. Continue reading to recognize more about Exipure action, components, and also pricing. Exipure Ingredients Listing The official website of Exipure discusses 8 unique ingredients inside this formula. These components are selected after going through numerous researches on each, verifying them an ideal choice for this formula. They are gotten from numerous areas, and also there is no specific information on each ingredient’s place specifically. Here is a listing of all Exipure components and also their results on the body. - Perilla: the first name in Exipure active ingredients is perilla, additionally called beefsteak plant. There are so many researches validating its impact on cholesterol levels, as it balances the HDL and also LDL levels and aids in brown fat formation. Some of its compounds additionally use cognitive advantages and also enhance brain-to-body coordination. - Divine Basil: following is Holy Basil, a component with tested medical advantages. It relieves anxiety, and inflammation, both biggest triggers of a slow-moving metabolic process. Exipure West Covina California It likewise clears the body from contaminants, waste materials, as well as mobile wastage, maintaining perfect metabolic conditions for the body. - White Oriental Ginseng: Exipure tablets also contain Panax ginseng or Oriental ginseng, which supplies unequaled power to the body. This energy aids the body run its functions in spite of slimming down, and there is no tired or weak sensation experienced by the body. - Amur Cork Bark: not as popular as various other components, but amur cork bark uses metabolic benefits that make weight management easy. 
Exipure West Covina California It eases bloating, diarrhea, cramps, nausea or vomiting, unwanted gas, and also various other conditions that prevail in overweight people. - Quercetin: Next on this listing is quercetin, an active ingredient offering benefits for blood pressure, heart health, and also vessel wellness. Some researches additionally show its role in improving resistance, delaying aging, and revitalizing body cells, maintaining them young for a long period of time. - Oleuropein: often referred to as Olea Europaea, oleuropein shrinks the fat cells, helping them transform to brownish fat while shedding a lot of energy used to fuel cellular activities. It additionally improves cholesterol degrees, blood pressure, sugar degrees, and lipid profile, avoiding several wellness problems. - Berberine: an additional name in the Exipure active ingredients checklist is berberine, which is filled with anti-inflammatory anti-oxidants. It assists clear the body from toxic substances, eliminating free radicals and mobile wastes that often impede metabolic rate. It supports healthy and balanced digestion, and with quercetin, it thaws much more fat in much less time. - Resveratrol: the last name in Exipure active ingredients is resveratrol, an antioxidant frequently found in grapes. It offers a number of wellness benefits, one of which is to reduce cholesterol degrees, avoid plaque formation, and also clear toxins. All these ingredients are obtained from pure quality resources, and also absolutely nothing among them can trigger any negative effects in the body. Read what Exipure evaluates from consumers and their stunning discoveries have to say concerning this supplement. Exipure West Covina California Is it actually worth spending money on? Look into this in-depth report which will impress you. Exactly how Does The Exipure Formula Job? Acquiring weight has become much easier as a result of the altered lifestyle as well as nutritional routines. 
Not just grownups, yet younger and also older individuals are additionally targets of weight problems now, and also these fads are enhancing every year. Exipure West Covina California The wellness experts are exceptionally concerned over these weight problems patterns, recommending people move to a much healthier way of life. However it is typically not an option, as well as for one reason or another, individuals tend to try to find shortcuts to make it occur. Exipure is a weight-loss nutritional formula developed with metabolic-boosting herbs. According to the firm, it aids change the white fat cells to brownish fat, making them better and also healthy for the body. There is a great deal of clinical evidence suggesting BAT is related to excessive weight. The makers of the Exipure weight reduction supplement have actually utilized this information and also created a formula that uses natural components to raise brown adipose tissue degrees. For individuals who do not know about brownish fats, it is a kind of fat that only triggers when the climate is cold. It thaws, supplying warm to the body, that makes cold temperature level bearable for it. Do not confuse this brownish fat with the routine fat, additionally called white fat, as it brings extra mitochondria in its cells, making this fat thaw more power release. This procedure burns a large number of calories, maintaining the body heated, invigorated, and also inducing weight-loss. Where To Purchase Exipure? Cost, Discount Rate and Refund Policy Exipure is presently in supply as well as available for instant shipments. The only way to get your hands on this supplement is with its official web site (exipure.com), as it is not available anywhere else. You can place the order online, directly, as well as your order will reach your doorstep within a couple of days. Do not rely on any online or regional vendor marketing Exipure supplement for weight reduction. 
Exipure West Covina California The firm has no partners, and there are high possibilities of various other companies utilizing its name to sell their phony items. Constantly choose the official web site over the random online shops to make your acquisition. The actual cost of the Exipure supplement is almost $200, yet it has actually reduced it to $59 just, as a part of promotions, to ensure that increasingly more individuals can find out about it. Below are the total prices details. - Obtain one bottle of Exipure (1 month supply) for $59.00 only (Plus distribution fees). - Get three bottles of Exipure (90 days supply) for $49.00 per bottle (And also distribution costs) + Bonus offer things. - Obtain six bottles of Exipure (180 days supply) for $39.00 per container (Free delivery) + Bonus offer products. Get Exipure at a Price Cut: From The Official Web Site Right Here Although it is much better to order only one bottle initially and order more later on, after using it for a few days. But Exipure might not be available all the time, as it is a prominent item with high need. The business can only produce a restricted stock, and restocking could take a few months. Consequently, it is far better to order three or 6 bottles to start a fat burning trip. You can always buy more bottles when readily available as well as proceed utilizing them for as long as you require. It is additionally important to note that Exipure Australia, NZ, Canada, UK, as well as clients from other nations around the world should likewise put their orders on the main web site pointed out over. Exipure Reimbursement Policy. Exipure features a 100% satisfaction warranty as the company prepares to reimburse the full order value, if this item stops working to meet your expectations. There is no minimum or optimum to get this offer and all orders purchased with the main internet site will immediately belong of this refund plan. 
The business concerns client satisfaction as its top concern as well as prepares to birth a loss, if Exipure stops working to meet its pledges. The time needed to get this reimbursement is 180 days, or six months, that is enough to judge this supplement. The company has an energetic consumer support team ready to help new as well as existing users. Get in touch with the customer care to recognize the process of refunds. Also, do not discard your made use of or vacant Exipure containers, as you may be asked to send them back to the business as a receipt. Do not trust sources apart from the official site to obtain your Exipure order as this reimbursement policy does not apply on bottles bought from unapproved resources. The refund demands gotten after passing this 180-day limit will certainly be rejected by the company, so keep a track of this time. Directions To Utilize Exipure Supplement. Using Exipure is no various than making use of multivitamins, and also you just need a glass of water to eat it. The day-to-day dose is only one capsule, as well as taking a higher dosage is purely restricted. Exipure West Covina California There is no set time to take this day-to-day dose, and you can take it whenever of the day. Nonetheless, it is better to repair a time to consume it to make sure that you do not neglect or miss the day-to-day dosage. The results are evident within 3 to 6 months, yet it can be made use of for longer than 6 months, too, as it has no adverse effects to supply. Though private results might differ, Exipure is for everyone, regardless of weight, however very obese people may take greater than 6 months to reach their target weight. Absolutely nothing inside Exipure has a habit forming capacity or withdrawal impact, and also you can utilize it over and over without worrying about anything. Others that are just a couple of extra pounds over their target weight will just see adjustments in a couple of weeks. 
The time required to show the results relies on the body’s capacity to reply to different components, as well as no 2 bodies share the exact same functions. Do not make use of Exipure if you are unsure concerning its use, or talk to a medical professional to learn more about supplement security. Exipure Reviews – The Verdict. To sum up, Exipure appears to be a potent weight loss supplement, with straight benefits for the metabolism. It utilizes a healthy means to reduce weight, which is why the outcomes of the Exipure supplement are longer and better. As a matter of fact, they continue to be the very same after you stop using the supplement and also preserve the outcomes with fundamental dietary modifications as well as exercise. All orders are shielded with a 180-day money-back guarantee, during which you can choose to get a refund of your order if it stops working to assist you in weight management. Do not take even more time since Exipure is marketing quick, and also there are only minimal containers left. Check out the main website to validate your order before the supply sells out. What is The Right Time to Take In Exipure? There is no standard time to use this supplement, as well as the user can take it according to his convenience. The firm makes certain there is no sedative active ingredient; as a result, Exipure does not influence the sleeping cycle. Exipure West Covina CaliforniaNonetheless, it is best to take it during the initial half of the day so that it has all the time to turn on BAT conversion. Is Exipure Suitable For Everyone? Based upon the info shared online, Exipure has a 100% natural formula without GMO ingredients, soy, and gluten in it. It is additionally devoid of unnecessary chemicals, fillers, binders, and also chemicals that might make an item unsuitable for long-lasting usage. The formula is best for people in their middle ages, taking care of excessive weight with no time to diet plan preparation or workout. 
Just How Much Weight Can One Lose with Exipure? The quantity of weight reduction can be various for different individuals, and also there is a standard for this weight-loss. One can shed more weight in much less time by consuming Exipure diet pills in a healthy and balanced, low-calorie diet plan and also an active way of living. Can You Get An Allergy From Exipure? Exipure has no risk of negative effects as well as allergies, as well as it is unusual to have allergic reactions with medical ingredients, in general. There are no grievances from the users, and no user reported an allergy after utilizing this formula. If a person has a history of food-borne allergic reactions, it is far better to talk with a doctor beforehand for a risk-free weight management experience. Exactly How To Call Exipure Company? The business has an active customer assistance line to promote new and existing clients. Exipure West Covina California All the orders are protected with a 180-day money-back offer that can be availed by contacting the group. Right here is just how to call them. Domestic Phone Calls: 1 (800) 390 6035. International Calls: 1 (208) 345 4245.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00235.warc.gz
CC-MAIN-2023-06
16,571
63
https://xenforo.com/community/threads/using-xenforo-with-digital-access-pass.49529/
code
Sadly (perhaps not sadly!), I don't want to use vBulletin. I'm wondering if there is a way I can achieve the same or a similar thing with XenForo? Or at least I'd like to get a picture of the ways I could sell/control access to the forum. DAP has a plugin system: I'm not a programmer, but from what I can see, it would allow a notification to be sent to a third party when a user signs up for a product. I think the next version of DAP will also allow a notification to be sent when a user cancels/doesn't pay. I see that XenForo has XenAPI - if DAP is able to send a notification, could a user be created/updated and given access to a specified forum? I sort of feel that some of this is beyond me, so I'm wondering if anyone has done it. I'm guessing that using this combo I wouldn't get the automatic log in/log out that DAP/vBulletin integration offers. (Sadly), I feel vBulletin might still be the best option for me in these particular circumstances. Any suggestions or insights are appreciated!
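Whether this combo works comes down to the glue between DAP's notification plugin and XenForo. As a purely hypothetical sketch of that glue (the payload fields, the "register" action name, and the "hash" parameter below are my assumptions for illustration, not documented DAP or XenAPI behaviour), the mapping step might look like:

```python
# Hypothetical DAP -> XenForo glue. The DAP payload fields and the
# XenAPI action/parameter names are assumptions, not taken from either
# product's documentation.
def dap_notification_to_xenapi_params(payload, api_key):
    """Map a (hypothetical) DAP signup notification to the query
    parameters for a (hypothetical) XenAPI 'register' call."""
    if payload.get("event") != "signup":
        return None  # ignore cancellations etc. in this sketch
    return {
        "action": "register",          # assumed XenAPI action name
        "hash": api_key,               # assumed API-key parameter
        "username": payload["username"],
        "email": payload["email"],
        # map the purchased product onto a XenForo user group
        "group": payload.get("product", "members"),
    }
```

A receiving script would take these parameters and POST them to the XenAPI endpoint; handling cancellations would need a second mapping that demotes or deactivates the user.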
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106754.4/warc/CC-MAIN-20170820131332-20170820151332-00472.warc.gz
CC-MAIN-2017-34
987
7
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aspittel/moving-past-tutorials-pseudocode-13a6/comments
code
Love all the different examples. And the reminder to consider edge cases. It's easier to write pseudocode for an algorithms problem than for an application; showing both is a great way to see how you get from here to there :) Interesting! I think it's really important to learn how to get past the blank editor problem - there's nothing more intimidating than a blank sheet of paper. I've not tried doing it with pseudocode. I rely on writing a test first - identifying one behaviour of the program I want to write and imagining that I've already written it. What does it look like? What will it do? What function gets called? With what? What will it return? For me, a test gets me solving one small problem, gets something down on the page and stops me fretting about everything else that's going on in the code/in my life/on TV. Does your pseudocode have an afterlife? Does it become comments or do you tend to bin it once it's been replaced by code? I think that's a really natural progression, and I normally write function declarations or tests first too. But for beginners/jr devs, I think writing more traditional pseudocode is super helpful. It depends on the program, but I am normally minimalist on comments for simple code, maximalist on longer-form documentation. For your afterlife question: as recommended by Steve McConnell (software engineering expert), once the pseudocode is written, you build the code around it and the pseudocode turns into programming-language comments. This eliminates most commenting effort. As a result the comments will be complete and much more meaningful. I really like this write-up Ali. I’ve done pseudocoding without knowing it actually had a term 😂. Mine is not as organized as your examples above, it’s more scatterbrained, but I try to outline what I may need to accomplish a task in bullet points or sentences.
Nice, I've always loved writing pseudocode as a starting point for programs or functions, usually as a piece inside the comments section, which really serves as a focus and guideline for what needs to be done. (And proof that you know what to do :)) Maybe one day all code will be written in pseudocode... mmm Pseudocode is the first thing I learned when I started my software programming course in school; it's so useful later on! Interestingly, way back in the Jurassic of computing (the 1970s) we were taught to always begin with pseudocode; this was primarily because the means of getting the computer to read our programs was punch cards, but it still makes sense. Great article, couldn't agree more. Pseudocode is definitely a very important practice, a by-product of which is great comments in the code. If you write a concise version of the pseudocode inside the code and then add the relevant code beneath it, you've just got yourself comments that will help you (or anyone else) who needs to maintain this bit of code in the future. Thanks for reminding us of this basic but powerful concept! Goodbye, blank text editor :-) Such a great series! This post is a game changer for me. Thanks Ali! I love it! I write articles and focus on the problem-solving part rather than a specific programming/scripting language. I think it's pragmatic for solving leetcode. I'd try it when I challenge them.
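The McConnell workflow these comments describe (write the pseudocode first, then build the code around it so the pseudocode survives as comments) can look like this in practice; the example itself is mine, not from the article:

```python
# Pseudocode written first, kept as comments once the real code is filled in.
def top_scores(scores, n):
    """Return the n highest scores, best first."""
    # 1. Sort a copy of the scores from highest to lowest.
    ranked = sorted(scores, reverse=True)
    # 2. Keep only the first n entries and return them.
    #    (An empty input simply returns an empty list.)
    return ranked[:n]
```

The comments were the whole plan before any code existed, which is why they describe intent rather than restate the syntax.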
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00535.warc.gz
CC-MAIN-2019-43
3,439
25
https://openclipart.org/detail/4087/abstracted-personal-stress-appraisal
code
abstracted personal stress appraisal by cibo00 - uploaded on April 26, 2007, 11:57 am The thin line around each circle represents what you notice first (readily detectable characteristics). The inside of each circle means all the deep-level characteristics someone has (e.g., personality, self-esteem, attitudes, education, skills, etc.). The arrows surrounding the circles represent sources of personal stress (e.g., fears, performance, peer pressure, socio-economic issues, uncertainties, etc.). Anyone is free to interpret the meaning of the size and color of each arrow, because people differ on what they perceive as a source of personal stress. Further, depending on the personal stress you are enduring, it may affect others (bowed red thin arrow), though it may be reduced by social support (bowed green arrow). The green arrow makes a difference in reducing the size of all other arrows.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00067.warc.gz
CC-MAIN-2022-49
895
4
http://www.reference.com/browse/Sodipodi
code
Sodipodi is a vector graphics editor. The main author is Lauris Kaplinski, and several other people have contributed to the project. The project is no longer under active development, but development continues on Inkscape, a 2003 fork of Sodipodi. Sodipodi itself started as a fork of Gill, a vector-graphics program written by Raph Levien. The primary design goal of Sodipodi is to produce a usable editor for vector graphics, and a drawing tool for artists. Although it uses SVG as its native file format (including some extensions to hold metadata), it is not intended to be a full implementation of the SVG standard. Sodipodi imports and exports plain SVG data, and can also export raster graphics in PNG format. The user interface of Sodipodi is a Controlled Single Document Interface (CSDI) similar to The GIMP. Sodipodi is available for Linux and Microsoft Windows. The latest version is 0.34, released on 11 February 2004. Released under the GNU General Public License, Sodipodi is free software. Sodipodi started a collection of SVG clip art containing symbols and flags from around the world. This work helped inspire the Open Clip Art Library. Inkscape is a fork of Sodipodi founded in 2003 by some Sodipodi developers with different goals, including redesigning the interface and closer compliance with the SVG standard.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704131463/warc/CC-MAIN-20130516113531-00089-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,336
11
https://percussioncmscommunity.intsof.com/t/cm1-is-adding-to-my-javascript-can-i-stop-it-from-publishing-this-with-my-code/542
code
In my search box I have the below code, and the CDATA that is published with my site is preventing the search from working. Can you please let me know how to stop this from publishing with these pages?
/* // ]]>
document.write(' ');]]>
// ]]>
/*]]>*/
Can you tell me what version of CM1 you are running (select the “?” icon in the upper right of the UI, and then hit Help)? I ask because there is a known issue with version 2.13 of CM1 where our addition of managed links in HTML widgets causes certain scripting code to auto-format incorrectly. This has been fixed in version 3.0. Is your search box code within an HTML or a Rich-Text widget? If it’s in a Rich-Text widget, I would recommend moving it to an HTML widget, where the code will not be wrapped in CDATA tags. Hey Jon: Thanks, that worked! Great, glad to hear.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00088.warc.gz
CC-MAIN-2023-50
826
7
https://wflow.readthedocs.io/en/stable/wflow_delwaq.html
code
The wflow_delwaq module provides a set of functions to construct a delwaq pointer file from a PCRaster local drainage network. A command-line interface is provided that allows you to create a delwaq model that can be linked to a wflow model. The script sets up a one-layer model (representing the kinematic wave reservoir). Water is labeled according to the area and flux where it enters the kinematic wave reservoir. For the script to work, a run of the wflow model must be available, and a template directory in which the delwaq model is created should also be available. These are indicated by the -C, -R and -D command line options. The -R and -C options indicate the wflow case and wflow run directories, while the -D option indicates the delwaq template directory. The template used is shown below:
debug/
fixed/
fixed/B2_numsettings.inc
fixed/B4_dispersion.inc
fixed/B4_dispx.inc
fixed/B9_Hisvar.inc
fixed/B9_Mapvar.inc
includes_deltashell/
includes_flow/
run.bat
dlwqlib.dll
libcta.dll
libiomp5md.dll
waq_plugin_wasteload.dll
delwaq1.exe
delwaq2.exe
deltashell.inp
The debug, includes_flow, and includes_deltashell directories are filled by the script. After that, the delwaq1.exe and delwaq2.exe programs may be run (the run.bat file shows how this is done):
delwaq1.exe deltashell.inp -np
delwaq2.exe deltashell.inp
The script sets up delwaq such that the results for the wflow gauge locations are stored in the deltashell.his file.
How the script works¶
The pointer file for delwaq is made using the following information:
The wflow_ldd.map file is used to create the internal flow network; it defines the segments and how water flows between the segments.
The number of inflows into each segment is taken from the sources mapstacks (-S option). Together these sources should include all the water that enters the kinematic wave reservoir. These are the red and green arrows in the figure below.
The delwaq network is generated for the area defined in the wflow_catchment map.
The included area is defined by all cells where the catchment id in a cell is larger than 1. Within the includes_flow directory the following files are made:
volume.dat - volumes (N+1 timesteps). Content is noseg
flow.dat - flows (N timesteps). Content is noq
area.dat - N timesteps. Content is noq
surface.dat - surface area of the water per segment (N+1 timesteps). Content is noseg
length.dat - one timestep only (constant). Content is two times noq
Here noseg is the number of segments (taken from the non-missing grid cells in the wflow_ldd.map file) and noq is the number of exchanges, which is calculated as the number of segments plus the number of inflows (in each segment) times the number of segments. Delwaq expects volumes to be instantaneous values at the start of a timestep, while flows are integrated between two timesteps. For volumes N+1 timesteps are needed, for flows N timesteps. The figure below demonstrates this principle for N=4. The volume.dat file is filled with N+1 steps of volumes of the wflow kinematic wave reservoir. To obtain the needed lag between the flows and the volumes, the volumes are taken from the kinematic wave reservoir one timestep back (OldKinWaveVolume). The flow.dat file is filled as follows. For each timestep, internal flows (within the kinematic wave reservoir, i.e. flows from segment to segment) are written first (blue in the layout above). Next, the flows into each segment are written, depending on how many inflow types are given to the script (sources). For one type one set of flows is written, if there are two types two sets, etc. (green and red in the layout above). Very simple example:¶ The following very simple example demonstrates how the pointer file is created. First the pcraster ldd: The resulting network consists of 10 points: As can be seen, both 9 and 10 are bottom points.
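The N+1 volumes versus N flows bookkeeping described above can be sketched as follows. This is a toy illustration of the indexing only; the function and variable names are mine, not the actual wflow_delwaq code:

```python
def build_delwaq_series(old_volumes, flows):
    """Pair kinematic-wave volumes lagged one step back (OldKinWaveVolume,
    N+1 values: the initial state plus one per timestep) with the
    per-timestep integrated flows (N values), the way delwaq expects them:
    volume i is the state at the *start* of step i, flow i is integrated
    over step i, and volume i+1 closes the step."""
    n = len(flows)
    assert len(old_volumes) == n + 1, "delwaq needs N+1 volumes for N flows"
    # Each flow record sits between two consecutive volume records.
    return [(old_volumes[i], flows[i], old_volumes[i + 1]) for i in range(n)]
```

For N=4 (the figure's case) this yields four (start-volume, flow, end-volume) triples from five volume records.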
The generated pointer file is shown below:
;Written by dw_WritePointer
;nr of pointers is: 20
1 3 0 0
2 4 0 0
3 5 0 0
4 6 0 0
5 7 0 0
6 8 0 0
7 9 0 0
8 10 0 0
9 -1 0 0
10 -2 0 0
-3 1 0 0
-4 2 0 0
-5 3 0 0
-6 4 0 0
-7 5 0 0
-8 6 0 0
-9 7 0 0
-10 8 0 0
-11 9 0 0
-12 10 0 0
Case study for Malaysia and Singapore¶
To estimate the load of different nutrients to the Johor strait, two wflow_sbm models have been set up. Next, these models were linked to delwaq as follows:
A delwaq segment network similar to the wflow D8 ldd was made.
The volumes in the delwaq segments are taken from the wflow_sbm kinematic wave volumes.
For each segment two sources (inflows) are constructed, fast and slow, each representing different runoff compartments from the wflow model. Fast represents SOF [1], HOF [2] and SSSF [3] while Slow represents groundwater flow.
Next the flow types are combined with the available land-use classes. As such, a LU-class times flow-types matrix of constituents is made. Each constituent (e.g. Slow flow of LU class 1) is traced throughout the system. All constituents are conservative and have a concentration of 1 as they flow into each segment.
To check for consistency, an Initial water type and a Check water type are introduced. The Initial water will leave the system gradually after a cold start; the Check water type is added to each flow component and should be 1 at each location in the system (Mass Balance Check).
[1] SOF: Saturation Overland Flow
[2] HOF: Hortonian Overland Flow (or infiltration excess Overland Flow)
[3] SSSF: SubSurface Storm Flow. Rapid lateral flow through the top part of the soil profile.
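The layout of the example pointer file (segment-to-segment exchanges first, then one boundary inflow per segment, with negative ids for boundaries) can be reproduced with a short sketch. This is my own illustration of the convention, not the actual dw_WritePointer implementation:

```python
def make_pointer(downstream):
    """Build delwaq pointer rows from a {segment: downstream_segment} map
    (0 means the segment is a bottom/outflow point). Negative ids are
    boundaries: outflow boundaries are numbered first, then one inflow
    boundary per segment."""
    pointers = []
    boundary = 0
    # Internal flows and outflows, in segment order.
    for seg in sorted(downstream):
        down = downstream[seg]
        if down == 0:              # bottom point: water leaves the model
            boundary -= 1
            pointers.append((seg, boundary, 0, 0))
        else:
            pointers.append((seg, down, 0, 0))
    # One external inflow (source) into every segment.
    for seg in sorted(downstream):
        boundary -= 1
        pointers.append((boundary, seg, 0, 0))
    return pointers

# The 10-point example network: two parallel chains 1-3-5-7-9 and 2-4-6-8-10,
# with 9 and 10 as bottom points.
ldd = {1: 3, 2: 4, 3: 5, 4: 6, 5: 7, 6: 8, 7: 9, 8: 10, 9: 0, 10: 0}
```

Applied to the example ldd this reproduces exactly the 20 rows listed above.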
By assuming each flow type is an end-member in a mixing model, we can assign fixed concentrations of real parameters to the flow fractions and multiply those with the concentrations of the end-members; modelled concentrations at the gauge locations can then be obtained for each timestep. The figure above shows the flow types in the models used in Singapore and Malaysia. Groundwater flow (both from completely saturated cells and subcell groundwater flow) makes up the Slow flow that is fed into the delwaq model, while SOF and HOF make up the Fast flow to the delwaq model. In addition, the water is also labelled according to the land-use type of the cell that it flows out of. The whole procedure was set up in a Delft-FEWS configuration that can run the following steps operationally:
wflow_delwaq module documentation¶
Simple export library for the pcraster/python delwaq link. The module can be used to export an ldd to a delwaq pointer file and fill the input arrays. The library includes a command-line interface that allows you to set up a delwaq model and feed it with forcing data. This is an experimental version.
The wflow run should have saved at least the following mapstacks:
- self.OldKinWaveVolume=vol
- self.WaterLevel=lev
- self.SurfaceRunoff=run
- self.Inwater=inw (or the different components that make up this flux)
The script always sets up at least two substances, Initial and Check. Initial is present everywhere at startup and its concentration is zero in all inputs. Check is not present at startup and is set to 1 in all inputs.
The script takes an areamap that can be used to tag water as it enters the model. This can be a landuse map, a subcatchment map, etc. Furthermore, water can also be tagged based on the flux into the model.
The naming of the substances is as follows: “Area” areamap_class inflow_number

Command line options:

-C: caseDir - set the wflow case directory to use
-R: runId - set the wflow runId to use
-T: Set last timestep
-O: set starttime ('%Y-%m-%d %H:%M:%S')
-a: Also write dynamic area data if this option is set
-j: if this option is set the static data is not generated (default is on)
-A: sets the areamap used to specify the fraction sources. This can be a subcatchment map, a soil type map, a land use map etc. Default is: staticmaps/wflow_subcatch.map (relative to the caseDir directory)
-D: delwaqdir - set the basedir to create the delwaq schematisation in
-S: sourcemap - name of the wflow output map to use as source. It should be a variable that flows into the kinematic wave routine; inw is normally used as it contains all water per cell that flows into the kinematic wave function. Use multiple -S options to include multiple maps
-s: Set the model timestep in seconds (default 86400)
-c: Name of the wflow configuration file
-n: Name of the wflow netCDF output file, expected in caseDir/runId/. If not present, mapstacks will be used.

ToDo:
- add support for a coarser delwaq network based on a supplied map
- test option to separate construction of the network from filling of the input arrays
- add support to not only follow the kinematic wave reservoir but also the flow through the soil reservoirs. Basically make three layers:
  - kinematic wave reservoir (surface water)
  - unsaturated store (only vertical flow)
  - saturated store (horizontal and vertical flow)

create the dir to save delwaq info in

Returns number of cells in 1st and 2nd grid directions.
input:
- ptid_map : PCRaster map with unique id’s

Generates a Delwaq attributes (*.atr) file.
input:
- fname : file name to write to
- noseg : number of delwaq segments

dw_WriteBndFile(fname, ptid_map, pointer, pointer_labels, areas, source_ids)¶
Writes Delwaq *.bnd file.
input:
- fname : output file name (without file extension)
- ptid_map : PCRaster map with unique id’s
- pointer : delwaq pointers
- pointer_labels : numpy array with pointer types
- areas : area id per inflow
- source_ids : list of source names

A unique boundary is generated per source for all segments in a given area. A unique boundary is generated for each outflow.

dw_WriteBoundlist(fname, pointer, areas, inflowtypes)¶
Writes the boundary list file B5_boundlist.inc. Numbering is abs(exchange id).
ToDo: add labeling of different inflows (the information is already present)

Generates a Delwaq *.hyd file.
d is a dict holding all the required data:
- d[‘runid’] : current run id
- d[‘tref’] : reference time of simulation as datetime
- d[‘tstart’] : start time of simulation as datetime
- d[‘tstop’] : stop time of simulation as datetime
- d[‘tstep’] : timestep of simulation as timedelta
- d[‘m’] : number of grid cells in 1st direction
- d[‘n’] : number of grid cells in 2nd direction

Writes the number of exchanges to file (number of rows in the pointer file).

Writes the number of segments to the B3 file.

dw_WritePointer(fname, pointer, binary=False)¶
Writes the pointer file B4_pointer.inc.

dw_WriteSegmentOrExchangeData(ttime, fname, datablock, boundids, WriteAscii=True)¶
Writes a timestep to a segment/exchange data file (appends to an existing file or creates a new one).
- ttime - time for this timestep
- fname - file path of the segment/exchange data file
- datablock - array with data
- boundids - to write more than 1 block
- WriteAscii - set to 1 to also make an ascii dump

dw_WriteWaqGeom(fname, ptid_map, ldd_map)¶
Writes Delwaq netCDF geometry file (*_waqgeom.nc).
input:
- fname : output file name (without file extension)
- ptid_map : PCRaster map with unique id’s

dw_Write_B2_outlocs(fname, gauges, segs)¶
Write an output loc file based on the wflow_gauges map.
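A binary delwaq segment/exchange data file of the kind dw_WriteSegmentOrExchangeData produces is, in essence, a sequence of records consisting of a 4-byte integer timestamp followed by one float32 value per segment or exchange. The sketch below only illustrates that assumed layout; the real function's handling of boundids and the ascii dump is not reproduced:

```python
import os
import struct

def write_timestep(fname, ttime, values):
    """Append one record: a 4-byte little-endian int time stamp
    followed by one float32 per segment/exchange (assumed layout)."""
    with open(fname, "ab") as f:
        f.write(struct.pack("<i", ttime))
        f.write(struct.pack("<%df" % len(values), *values))

# Start from a clean file, then write two daily timesteps of volumes.
if os.path.exists("volume.dat"):
    os.remove("volume.dat")
write_timestep("volume.dat", 0, [1.5, 2.5, 3.5])
write_timestep("volume.dat", 86400, [1.6, 2.4, 3.6])
```

Appending record by record matches how the documentation describes the function: it extends an existing file or creates a new one.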
Writes the B1_sublist.inc file. It writes substances for the areas and an initial and mass balance check substance.

dw_Write_Times(dwdir, T0, timeSteps, timeStepSec)¶
Writes B1_T0.inc, B2_outputtimers.inc, B2_sysclock.inc and B2_simtimers.inc. Assumes daily timesteps for now!

dw_mkDelwaqPointers(ldd, amap, difboun, layers)¶
An ldd is used to determine the from-to relations for delwaq using the PCRaster up/downstream commands. amap is used to link boundaries to the segments for delwaq (negative numbers); these are area-based boundaries. difboun is a python dictionary with inflows for each cell.

input:
- ldd : the ldd map (used to determine the active points)
- difboun : number of inflow boundaries per cell
- layers : nr of soil layers (only vertical flow). Only one layer at present (layers must be 1)

returns: pointer, fromto, outflows, boundaries, segment - a matrix with 4 columns: from, to, zero, zero. Use savetxt("pointer.inc", pointer, fmt='%10.0f') to save this for use with delwaq.

The pointers list first contains the “internal” fluxes in the kinematic wave reservoir, next the fluxes (1-n) into the kinematic wave reservoir.

ToDo: add extra column with boundary labels (of the inflows)

Converts a pcrmap to a numpy array that is flattened and from which missing values are removed. Used for generating delwaq data.

read_timestep(nc, var, timestep, logger, caseId, runId)¶
Returns a map of the given variable at the given timestep.
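The from-to derivation that dw_mkDelwaqPointers performs with PCRaster can be illustrated on a toy drainage network where each segment's downstream neighbour is given directly (illustrative code only, not the wflow implementation):

```python
import numpy as np

# downstream[i] is the segment that segment i+1 drains to;
# 0 marks a pit/outflow. Toy network: 1 -> 2 -> 3 -> out.
downstream = [2, 3, 0]

pointer = []
boundary = 0
for seg, down in enumerate(downstream, start=1):
    if down == 0:                     # outflows get negative ids
        boundary -= 1
        pointer.append([seg, boundary, 0, 0])
    else:                             # internal kinematic-wave flux
        pointer.append([seg, down, 0, 0])

pointer = np.array(pointer)           # 4 columns: from, to, zero, zero
```

As in the documented return value, the result can be saved with savetxt("pointer.inc", pointer, fmt='%10.0f').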
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148850.96/warc/CC-MAIN-20200229083813-20200229113813-00285.warc.gz
CC-MAIN-2020-10
12,613
113
https://hpc.fau.de/2017/11/17/open-source-architecture-code-analyzer-osaca-released/
code
Open Source Architecture Code Analyzer (OSACA) has been released

The Open Source Architecture Code Analyzer (OSACA) is a tool that can analyze assembly or machine code and produce a best-case (throughput-limited) runtime prediction, assuming that the data is in the L1 cache. Such a tool is sorely needed for processor architectures other than Intel’s. Intel provides the Intel Architecture Code Analyzer (IACA) for free, but it is not open source and its future development path is unclear.

Why such a tool? Analytic performance models, such as the ECM model, depend on an accurate assessment of in-core execution performance. You can either do that manually by code (source or assembly) inspection, or you can use a tool that knows the instruction set and the limitations of a particular microarchitecture. The data flow analysis must be done by someone else – again, it’s either your brain or, e.g., our Kerncraft tool.

Jan Laukemann, our bachelor’s candidate, has taken on the tremendous task of developing the initial version of an IACA replacement, and OSACA is the result. It is downloadable from github: https://github.com/RRZE-HPC/osaca. We haven’t even bothered to give it a version number yet, so this is definitely in alpha stage, but it is usable and can do some things that IACA can’t, such as analyzing non-compiled assembly code or extending its own database with new instructions. Feedback is encouraged and welcome.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474541.96/warc/CC-MAIN-20240224144416-20240224174416-00212.warc.gz
CC-MAIN-2024-10
1,446
5
http://forums.linuxmint.com/viewtopic.php?p=711049
code
I've been running Linux Mint for about 3 years as a dual boot on my older computer (with Windows XP) and am in the process of switching to LMDE exclusively. Changed to LMDE over a year ago.

My question: what is the "Recovery Mode" on the boot screen, and how, when, and if should I use it?

I've never really had any serious problems with LMDE and have been running it for well over a year. The only real problems are trying to get two of my Windows XP programs to run, but LMDE otherwise has never given me any trouble.

tlcmd (aka Dick)
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152959.66/warc/CC-MAIN-20160205193912-00226-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
539
4
http://freecode.com/tags/mac-os-x?page=1&sort=vitality&with=2841&without=1133
code
tkbiff allows arbitrary commands to be executed upon mail reception. If you like programs such as xbiff and xbiff++ but wish they were more flexible, then you'll like tkbiff. Unlike other biffs, tkbiff is fully customizable. tkbiff also doesn't waste your valuable screen space with icons; instead, it shows you the mail itself. It supports UNIX, Mac, and Windows, IMAP, POP, and UNIX-style mail files, and SSL and APOP. "Stupid Simple VPN" is a virtual private networking program contained as source in a 7.0KB tarball. It does not daemonize, and is intended to create a temporary VPN for emergency circumstances when you "just need a pipe" without the hassle of authentication, keys, encryption, and compression. It runs on OS X using the tuntaposx driver and Linux 2.6 using the native tuntap driver. The National Space Science Data Center's (NSSDC) Common Data Format (CDF) is a self-describing data abstraction for the storage and manipulation of multidimensional data in a platform- and discipline-independent fashion. It consists of a scientific data management package (known as the "CDF Library") that allows programmers and application developers to manage and manipulate scalar, vector, and multi-dimensional data arrays. JobRunner is a tool designed to run a job (i.e., to execute a commandline or script) which does not require user interaction, and to store the program's output (standard error and standard out) in an easy-to-read log file. It is able to save a configuration file of the job to be run, which can be used by JobRunner_Caller as a lightweight queueing system. See the Primer included in the archive for extra details on usage examples. Moosic is a music player that focuses on easy playlist management. It consists of a server process that maintains a queue of music files to play and a client program which sends commands to the server. The server continually runs through its playlist, popping items off the top of the list and playing each with an external program. 
The client is a simple command-line utility which allows you to perform powerful operations upon the server's queue, including the addition of whole directory trees, automatic shuffling, and item removal according to regular expressions. The server comes configured to play MP3, Ogg, MIDI, MOD, and WAV files. BitTorrent is a tool for copying files from one machine to another. FTP punishes sites for being popular. Since all uploading is done from one place, a popular site needs big iron and big bandwidth. With BitTorrent, clients automatically mirror files they download, making the publisher's burden almost nothing.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164836485/warc/CC-MAIN-20131204134716-00087-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
2,620
6
https://forums.tumult.com/t/install-web-app-on-surface-pro-3/9741
code
All I want is to install the webapp as a desktop icon. This is easy to do on an iPad. You just view the webapp through the browser and then the browser can save a desktop link to the app and it can run offline. So how do I get an icon onto a screen of a surface pro so someone can just click on it and it runs the webapp? I was expecting Adobe edge to be able to do the same as safari. Apparently just keeping it as a favourite with edge doesn’t cache the app. Sadly I don’t have a surface pro. I’m reluctant to provide a link because it’s a private, pre-sales financial app. Thanks for the heads-up.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00724.warc.gz
CC-MAIN-2018-09
608
4
https://computergraphics.stackexchange.com/questions/407/changing-image-so-it-would-look-like-through-colorful-glasses
code
I am currently working on a simple pixel shader in HLSL. I send a texture to the shader and I want to make it more colorful (something like in the picture below). Picture 1 shows the original texture; picture 2 shows the effect that I want to achieve. Is there some mathematical formula to do that? My input is the RGBA value of each pixel.

EDIT: I'll try to be more concrete. Let's say I want to make that garden texture more red. I suppose that what I need to do is:

OutputR = InputR * X, OutputG = InputG * Y, OutputB = InputB * Z

But how do I find X, Y and Z?
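For reference, per-channel scaling works exactly as the edit suggests: factors greater than 1 boost a channel, factors less than 1 suppress it, and the result should be clamped to [0, 1] (in HLSL, `saturate()`). Shown here in Python rather than HLSL purely to illustrate the arithmetic:

```python
def tint(rgb, factors):
    """Multiply each channel by its factor and clamp to [0, 1]."""
    return tuple(min(max(c * f, 0.0), 1.0) for c, f in zip(rgb, factors))

# To make the texture "more red": boost R, damp G and B a bit.
reddish = tint((0.4, 0.6, 0.5), (1.5, 0.8, 0.8))
```

There is no single correct X, Y, Z; they are artistic parameters. A red tint just needs X > 1 and Y, Z < 1 (or X > Y, Z with overall brightness preserved).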
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00739.warc.gz
CC-MAIN-2023-50
568
6
http://lua-users.org/lists/lua-l/2005-01/msg00098.html
code
[Date Prev][Date Next][Thread Prev][Thread Next]
- Subject: RES: FW: srlua extension to run with glued zip file
- From: André de Leiradella <leiradella@...>
- Date: Wed, 5 Jan 2005 17:02:01 -0300

>> I've now a basic extension to srlua running which allows you to
>> combine multiple (lua-) files in one zip file and then start it as a
>> standalone executable.

> Great idea (if the amount the Lua code -- and you can precompile it, of
> course, to save space -- compresses is larger than the size of the

> Another way to do this on Windows would be to embed the "glued" codes
> as resource entries (so they are inside the EXE proper) and then to
> UPX the whole file. (UPX is a run-time in-memory unpackaging system.
> Recommended to impress your friends when showing them how small your
> Lua run-time is compared to their Perl or Python distro. The UPX
> decompressor is very small.)

> Has anyone tried using UPX for this (and is there an analogous way to
> embed binary data into ELF binaries so that the original executable is
> unaffected but the embedded data is technically "inside" as far as
> executable packers are concerned)?

Not tried it, but as far as Windows resources are concerned UPX can:

# upx -h
Options for win32/pe & rtm32/pe:
  --compress-exports=0    do not compress the export section
  --compress-exports=1    compress the export section [default]
  --compress-icons=0      do not compress any icons
  --compress-icons=1      compress all but the first icon
  --compress-icons=2      compress all but the first icon directory
  --compress-resources=0  do not compress any resources at all
  --strip-relocs=0        do not strip relocations
  --strip-relocs=1        strip relocations [default]

I made a library some time ago called luareader (http://www.geocities.com/andre_leiradella/#luareader) that can read from FILE pointers, file descriptors and from memory. It allows on-the-fly decompression of the streams using zlib or bzip2.
In fact, it has a reader that can decide if the underlying stream where the data is located is uncompressed, compressed with zlib or compressed with bzip2, and act accordingly.

It already has a reader that makes some substitutions to Lua source code to make OOP in Lua easier, translating sequences of @id (field access), @id(...) (method call) and @@id(...) (inherited method call) to user-defined Lua code. But take care, this reader is a hack.

I have an unfinished 2.0 version of luareader that can process files inside a tar file. The tar file can reside in the file system, in memory or even inside another tar file.

Andre de Leiradella
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254751.58/warc/CC-MAIN-20190519101512-20190519123512-00254.warc.gz
CC-MAIN-2019-22
2,542
45
https://forum.openwrt.org/t/block-all-website-url-except-some-whitelisted/29099/10
code
I have tried this, but I can access any domain; it didn't work.

Which dns backend did you use? You need to manually reference this special block list in your config! E.g. for dnsmasq (/etc/config/dhcp in the 'dnsmasq' config section):

option serversfile '/tmp/adb_list.jail'

I have added the string to the dnsmasq config and it works, but when I reboot the router the config deletes my strings and it doesn't work. This is my config file before reboot:

config dnsmasq
	option domainneeded '1'
	option localise_queries '1'
	option rebind_protection '1'
	option rebind_localhost '1'
	option local '/lan/'
	option domain 'lan'
	option expandhosts '1'
	option authoritative '1'
	option readethers '1'
	option leasefile '/tmp/dhcp.leases'
	option resolvfile '/tmp/resolv.conf.auto'
	option nonwildcard '1'
	option localservice '1'
	option serversfile '/tmp/adb_list.overall'
	option serversfile '/tmp/adb_list.jail'

When I reboot, the last line is deleted. I even tried to replace "overall" with "jail" but on reboot, same problem.

It was never intended to run both lists in the same dns instance. Please provide the output of adblock runtime information:

- adblock_status : enabled
- adblock_version : 3.5.2
- overall_domains : 0 (normal mode)
- fetch_utility : /bin/uclient-fetch (-)
- dns_backend : dnsmasq (/tmp)
- last_rundate : 15.01.2019 10:34:23
- system_release : GL.iNet GL-AR150, OpenWrt 18.06.1 r7258-5eb055306f

This is after reboot with only the .overall file.

As said before, adblock doesn't support both list types in the same dns instance. Therefore it might be best to disable adblock and maintain your 'jail' list manually. For example:

- disable adblock
- create a new 'jail' list in your /etc directory
- edit this file and add the following content:
- reference this newly created file in your dhcp config like before
- reboot & check ... only google.com should be available ...

Hope this helps!

@dibdot This didn't work for me.
I am not able to find anything on Google search about this new config option (serversfile) in /etc/config/dhcp. The dnsmasq restart log also doesn't leave any trace about this option, whether it read the file or not etc.

Edit: probably that option serversfile is the same as --servers-file. However it doesn't seem to be working. Sites which are not in this file are also accessible.

The 'serversfile' option ('servers-file' directive in dnsmasq) was added to OpenWrt around four years ago with this commit: https://git.openwrt.org/?p=openwrt/openwrt.git;a=commit;h=88fa9a8422a4352a34e33dc3229a561ec74e5b43 Historic releases like BB are generally out of date and no longer maintained.

What does this option do for blocking/whitelisting? On one hand I see this file '/tmp/adb_list.overall' referenced, which contains blacklisted domains. Then on the other hand, you are suggesting to create /tmp/adb_list.jail which is supposed to contain whitelisted domains. Is there something in the below syntax? Any help appreciated. Haven't been able to find much info on this serversfile option.

Edit: I have figured this out with experiments. No further queries as of now. This approach appears to be working. Thanks @dibdot for your support.

I want to do IP and URL whitelisting at the same time. Let me explain with an example:

- I want to whitelist google.com and 188.8.131.52
- I have my syslog server hosted on Gcloud having this ip 184.108.40.206

When I whitelist google I can't ping the ip 220.127.116.11. Obviously I can't reach it because I have whitelisted google.com with the above method. @dibdot can you tell me how I should whitelist this 18.104.22.168 ip and URL at the same time? Thanks in advance.

This thread is a year old, you may wish to create a new thread in the future. Then simply use Adblock to whitelist it. Adblock doesn't block IPs, so that's OK.

That's incorrect and not obvious. Connecting via IP should always work, unless you're using some other block software. In addition,
whitelisting should allow access, not block it.

user@not-blocked:~$ nslookup xxxxxxwebnews.com
Server:  127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: xxxxxxwebnews.com
Address: 78.140.190.xxx
Name: xxxxxxwebnews.com
Address: 78.140.190.xxx

user@blocked:~$ nslookup xxxxxxwebnews.com
Server:  127.0.0.53
Address: 127.0.0.53#53

** server can't find xxxxxxwebnews.com: NXDOMAIN

user@blocked:~$ ping 78.140.190.xxx -c 4
PING 78.140.190.xxx (78.140.190.xxx) 56(84) bytes of data.
64 bytes from 78.140.190.xxx: icmp_seq=1 ttl=54 time=86.6 ms
64 bytes from 78.140.190.xxx: icmp_seq=2 ttl=54 time=86.2 ms
64 bytes from 78.140.190.xxx: icmp_seq=3 ttl=54 time=86.1 ms
64 bytes from 78.140.190.xxx: icmp_seq=4 ttl=54 time=86.6 ms

--- 78.140.190.xxx ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 86.117/86.418/86.693/0.320 ms

(xxxxxx == pushed)

Hi, where do you specify this option in adblock, like @vgaetera mentioned: "there's already an option in adblock for such restrictive 'whitelist only' mode: [adblock_jail]"? I'm on Adblock Version 3.8.15 & OpenWrt 19.07.1 r10911-c155900f66 / LuCI openwrt-19.07 branch git-20.029.45734-adbbd5c. I can't seem to find this option?

Did you click Advanced?

Yes I did. Unless I'm not understanding, this whitelist is for exceptions to the default /tmp/adb_list.overall. But how do you enable the /tmp/adb_list.jail in this version for restrictive whitelist-only mode?

Hi, sorry to hijack this thread.

This feature has been removed a while ago ... you can simply do it yourself, check this post: Block all website url except some whitelisted here in this thread.

Thanks @dibdot, so what you're saying is that there is no LuCI GUI to input the whitelist anymore and you have to manually edit the adb_list.jail file?

Well, no one has asked for it for a long time, therefore I've removed this ... of course I can re-add this to adblock 4 again.
fyi, adblock 4pre4 now includes jail list creation again (Adblock 4 pre-releases). This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00017.warc.gz
CC-MAIN-2022-40
6,015
58
https://discourse.libsdl.org/t/noob-questions-about-sdl2-audio-api-and-artifacts/31344
code
I’ve got two questions. First: is the pull API designed for doing the actual audio processing in the callback(as opposed to just copying from a buffer)? I guess what I want to know is whether it would be redundant to pull from a circular buffer that’s filled up by another thread. Second question: I had these pops and crackles that I couldn’t get rid of when testing with a simple square wave that I generate in code. I tried different formats and buffer sizes but to no avail. Then I upgraded from 2.0.12 to 2.0.14 and now the artifacts are gone. I looked at the changelog and skimmed the issues pages but couldn’t find anything about it. So for my sanity (because nothing makes me more nervous than when things magically starts working) I’m wondering if any bugs that could have caused the artifacts were fixed between the two versions?
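On the first question: pulling from a circular buffer filled by another thread is a common, legitimate pattern with the callback (pull) API rather than a redundant one - the callback just copies, and the producer thread does the synthesis at its own pace. A minimal sketch of the idea (illustrative Python; a real SDL callback would be C and must never block, so it pads with silence on underrun):

```python
import threading
from collections import deque

class RingBuffer:
    """Thread-safe sample queue: a producer thread pushes samples,
    the audio callback pops a fixed-size block, padding with silence
    (zeros) when too few samples are available (underrun)."""
    def __init__(self, capacity):
        self.samples = deque(maxlen=capacity)
        self.lock = threading.Lock()

    def push(self, samples):
        with self.lock:
            self.samples.extend(samples)

    def pop_block(self, n):
        with self.lock:
            return [self.samples.popleft() if self.samples else 0
                    for _ in range(n)]

rb = RingBuffer(capacity=8)
rb.push([1, 2, 3])
block = rb.pop_block(4)   # [1, 2, 3, 0] - underrun padded with a zero
```

Whether this buys you anything over generating directly in the callback depends on how expensive your synthesis is; for a simple square wave, generating in the callback is fine.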
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00385.warc.gz
CC-MAIN-2021-21
849
2
https://www.clayshare.com/forums/general-questions/45802-bailey-mini-roller-discount-code
code
Thanks so much for a fantastic ClayShareCon. I just tried the code for the Bailey Mini Roller; I think it was ClayShareMini25. Jess said the code at the end of the demo called "Using GR Pottery Tools". I tried both capital and no capital letters with no luck. Does anyone know what the actual code is? Has it expired?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817670.11/warc/CC-MAIN-20240420153103-20240420183103-00196.warc.gz
CC-MAIN-2024-18
322
4
https://www.catalyzex.com/search?query=Topic&page=58
code
Manually labeling documents is tedious and expensive, but it is essential for training a traditional text classifier. In recent years, a few dataless text classification techniques have been proposed to address this problem. However, existing works mainly center on single-label classification problems, that is, each document is restricted to belonging to a single category. In this paper, we propose a novel Seed-guided Multi-label Topic Model, named SMTM. With a few seed words relevant to each category, SMTM conducts multi-label classification for a collection of documents without any labeled document. In SMTM, each category is associated with a single category-topic which covers the meaning of the category. To accommodate multi-labeled documents, we explicitly model the category sparsity in SMTM by using a spike and slab prior and a weak smoothing prior. That is, without using any threshold tuning, SMTM automatically selects the relevant categories for each document. To incorporate the supervision of the seed words, we propose a seed-guided biased GPU (i.e., generalized Polya urn) sampling procedure to guide the topic inference of SMTM. Experiments on two public datasets show that SMTM achieves better classification accuracy than state-of-the-art alternatives and even outperforms supervised solutions in some scenarios.

In the United States, the parties to a lawsuit are required to search through their electronically stored information to find documents that are relevant to the specific case and produce them to their opposing party. Negotiations over the scope of these searches often reflect a fear that something will be missed (Fear of Missing Out: FOMO). A Recall level of 80%, for example, means that 20% of the relevant documents will be left unproduced. This paper makes the argument that eDiscovery is the process of identifying responsive information, not identifying documents.
Documents are the carriers of the information; they are not the direct targets of the process. A given document may contain one or more topics or factoids and a factoid may appear in more than one document. The coupon collector's problem, Heaps' law, and other analyses provide ways to model the problem of finding information from among documents. In eDiscovery, however, the parties do not know how many factoids there might be in a collection or their probabilities. This paper describes a simple model that estimates the confidence that a fact will be omitted from the produced set (the identified set), while being contained in the missed set. Two data sets are then analyzed: a small set involving microaggressions and a larger set involving classification of web pages. Both show that it is possible to discover at least one example of each available topic within a relatively small number of documents, meaning that further effort will not return additional novel information. The smaller data set is also used to investigate whether the non-random order of searching for responsive documents commonly used in eDiscovery (called continuous active learning) affects the distribution of topics; it does not.

In this paper we present a novel approach to spam filtering and demonstrate its applicability with respect to SMS messages. Our approach requires minimal feature engineering and a small set of labelled data samples. Features are extracted using topic modelling based on latent Dirichlet allocation, and then a comprehensive data model is created using a Stacked Denoising Autoencoder (SDA). Topic modelling summarises the data, providing ease of use and high interpretability by visualising the topics using word clouds. Given that the SMS messages can be regarded as either spam (unwanted) or ham (wanted), the SDA is able to model the messages and accurately discriminate between the two classes without the need for a pre-labelled training set.
The results are compared against the state-of-the-art spam detection algorithms, with our proposed approach achieving over 97% accuracy, which compares favourably to the best reported algorithms presented in the literature.

The goal of this paper is to summarize methodologies used in extracting entities and topics from a database of criminal records and from a database of newspapers. Statistical models had successfully been used in studying the topics of roughly 300,000 New York Times articles. In addition, these models had also been used to successfully analyze entities related to people, organizations, and places (D Newman, 2006). Additionally, analytical approaches, especially in hotspot mapping, were used in some research with an aim to predict crime locations and circumstances in the future, and those approaches had been tested quite successfully (S Chainey, 2008). Based on the two above notions, this research was performed with the intention to apply data science techniques in analyzing a big amount of data, selecting valuable intelligence, clustering violations depending on their types of crime, and creating a crime graph that changes through time. In this research, the task was to download criminal datasets from Kaggle and a collection of news articles from Kaggle and EAGER project databases, and then to merge these datasets into one general dataset. The most important goal of this project was performing statistical and natural language processing methods to extract entities and topics as well as to group similar data points into correct clusters, in order to understand public data about U.S.-related crimes better.

Visual storytelling aims to generate a narrative paragraph from a sequence of images automatically. Existing approaches construct text description independently for each image and roughly concatenate them as a story, which leads to the problem of generating semantically incoherent content.
In this paper, we proposed a new way for visual storytelling by introducing a topic description task to detect the global semantic context of an image stream. A story is then constructed with the guidance of the topic description. In order to combine the two generation tasks, we propose a multi-agent communication framework that regards the topic description generator and the story generator as two agents and learns them simultaneously via an iterative updating mechanism. We validate our approach on VIST, where quantitative results, ablations, and human evaluation demonstrate our method's good ability in generating stories with higher quality compared to state-of-the-art methods.

Online forums provide a unique opportunity for online users to share comments and exchange information on a particular topic. Understanding user behaviour is valuable to organizations and has applications for social and security strategies, for instance, identifying user opinions within a community or predicting future behaviour. Discovering the semantic aspects in Incel forums is the main goal of this research; we apply natural language processing techniques based on topic modeling to latent topic discovery and opinion mining of users from a popular online Incel discussion forum. To prepare the input data for our study, we extracted the comments from Incels.co. The research experiments show that Artificial Intelligence (AI) based on NLP models can be effective for semantic and emotion knowledge discovery and retrieval of useful information from the Incel community. For example, we discovered semantic-related words that describe issues within a large volume of Incel comments, which is difficult with manual methods.

A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics.
In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image also verify the interpretability of RTT-GAN.

Recent years have seen significant advancement in text generation tasks with the help of neural language models. However, there exists a challenging task: generating math problem text based on mathematical equations, which has made little progress so far. In this paper, we present a novel equation-to-problem text generation model.
In our model, 1) we propose a flexible scheme to effectively encode math equations and enhance the equation encoder with a Variational Autoencoder (VAE); 2) given a math equation, we perform topic selection, after which a dynamic topic memory mechanism is introduced to restrict the topic distribution of the generator; 3) to avoid the commonsense violations common in traditional generation models, we pretrain word embeddings with a background knowledge graph (KG) and link decoded words to related words in the KG, aiming to inject background knowledge into our model. We evaluate our model through both automatic metrics and human evaluation; experiments demonstrate that our model outperforms baselines and previous models in both the accuracy and the richness of the generated problem text. Electronic health records (EHR) are a rich, heterogeneous collection of patient health information, whose broad adoption provides great opportunities for systematic health data mining. However, heterogeneous EHR data types and biased ascertainment impose computational challenges. Here, we present mixEHR, an unsupervised generative model integrating collaborative filtering and latent topic models, which jointly models the discrete distributions of data observation bias and actual data using latent disease-topic distributions. We apply mixEHR to 12.8 million phenotypic observations from the MIMIC dataset and use it to reveal latent disease topics, interpret EHR results, impute missing data, and predict mortality in intensive care units. Using both simulation and real data, we show that mixEHR outperforms previous methods and reveals meaningful multi-disease insights.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00284.warc.gz
CC-MAIN-2022-21
10,972
9
https://kb.cloudblue.com/en/129446
code
- Go to BA CP -> Operations > Subscriptions > Your_Subscription and click the Synchronize button. One of the following errors will appear:

Could not install Resources to PEMGATE: Parallels Operations Automation error #extype_id #110, module_id #db_service, Duplicate key violates unique constraint 'aps_account_context_pk' while executing 'error executing work'. Error Code: 0x23946d1d. Report ID: 993942

Table Subscription Resource doesn't contain row with id XXXXXXX, XXXXXXX. Error Code: 0x85c20417. Report ID: 993796

The same errors may also appear during order provisioning. The cause: the account in question was incompletely actualized in the APS context in the past.

- Go to the OA Management Node and execute the following command, replacing XXXXXXX with the affected account's ID:

/usr/local/pem/bin/saas_ctl sync_account XXXXXXX yes

- Check that the following OA task is completed: Propagation of APS 2.0 resources for the account #XXXXXX
- Try to synchronize the subscription again or re-submit the failed order.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00432.warc.gz
CC-MAIN-2018-30
1,050
12
https://lifesci.boun.edu.tr/en/functional-structural-brain-connectomes-and-applications-dementia-workshop
code
"Functional & Structural Brain Connectomes and Applications to Dementia" Workshop Organized by Boğaziçi University Electrical & Electronics Engineering Department's Volumetric Analysis & Visualization Group (VAVlab), this workshop is intended to bring researchers from different domains together over a series of invited talks, focusing on dementia from a clinical perspective, brain network modelling from theoretical/practical perspectives, and their overlaps. The one-and-a-half-day workshop programme is expected to provide an overview of the fundamentals and the state of the art, to highlight challenges, to suggest research directions/topics at the frontline, and to foster collaboration. The idea that the brain is composed of functional and structural networks, and that defects of these networks are either the cause or the result of the majority of neurological/psychiatric disorders, has long been widely accepted and is supported by studies. Relatively recently, the use of fMRI (functional MRI) and dMRI (diffusion MRI) has enabled researchers to model functional (fNET) and structural (sNET) networks based on 3D imaging data. Despite the previously unimaginable capacity that these modalities have provided, there are several open problems in data acquisition, data processing, network modelling, and analysis. Solutions to these highly interdisciplinary problems are expected to enable early diagnosis of neurodegenerative diseases, as well as their treatment planning and monitoring with objective criteria. Dementia is a major target in this respect. For details and online registration please visit the workshop website.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00417.warc.gz
CC-MAIN-2023-06
1,627
4
https://www.freelancer.co.ro/portfolio-items/318363-app-design-for-packaging-gifts
code
For this project, the client needed a UI/UX designer to make an app dedicated to package-making services for gifts. The app is designed for people who want to create gifts from scratch. With this app, a user can: - choose the package shape - package separation options (from the inside) - package color - package design - card type - card color - card style - goods that will go in the package and how many things they would want to include inside their gift (chocolate or sweets only for the moment) After completing all the requirements to make their gift, they can easily review its total price.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00833.warc.gz
CC-MAIN-2023-50
605
11
http://forums.devshed.com/iis-97/anonymous-ftp-access-947684.html
code
June 28th, 2013, 07:08 AM Anonymous FTP access Hi, I'm trying to configure anonymous FTP access in IIS7. I have an FTP server offering support files for download to ANYONE. I need to configure my FTP server to allow anonymous access to a specific directory. I have created a directory C:/FTP/public/ and in the IIS management console have added anonymous authentication and added anonymous users only. Basic authentication is disabled. The default username for IIS anonymous access appears to be IUSR with no password. When I pass ftp://[email protected] I get prompted for a username and password. How can I just log in anonymously with NO password and pass this directly from a link such as <a href="ftp://[email protected]">Support Downloads</a> Also, is it possible for these users to browse the contents of the FTP server via the web browser? Will it do this automatically or do I need some kind of index? June 28th, 2013, 06:45 PM I have no idea about FTP, but the anonymous user account for IIS HTTP is IUSR_<computername>, not just IUSR. You can review the user accounts and groups on the server if you have admin access to it. "I've never been able to appreciate the sublime arrogance of folks who feel they were put on earth just to save other folks from themselves..." - Donald Hamilton
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00213-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,301
10
http://erfixitgirl.blogspot.com/2010/08/i-think-its-time-to-go-private.html
code
Monday, August 2, 2010 I Think It's Time To Go Private So I'm wondering, what do you think about going private with my blog? If you are my friend and you are reading it, then it really doesn't matter, right? I've just been thinking that anyone can access my "private" not-so-private life just by entering a few words into Google, and if they match what is in my blog then anyone can see it. Those of you who have private blogs, what do you think? I'm thinking that I will go private in a few days. If you can't stand to be without knowing what is going on in my not so interesting life, or rather what my thoughts are, then by all means send me an email and let me know that you want to have access to my mind and I will add you to my list of friends. It's not like I have anything to hide; it is just weird to have whoever, wherever looking at what I write. What do you think?
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589222.18/warc/CC-MAIN-20180716060836-20180716080836-00200.warc.gz
CC-MAIN-2018-30
880
3
https://stackoverflow.com/questions/11174851/how-to-use-zip4j-to-extract-an-zip-file-with-password-protection
code
I am trying to unzip a zip file with password protection. I know there is a Java library named "zip4j" that could help me, but I am unable to open the zip4j website to see the tutorial. I downloaded the zip4j library from another mirror, but I don't know how to use it. Could anyone paste example code for using zip4j to unzip a password-protected zip file? Thanks so much!
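Since the zip4j site is unreachable, here is a minimal sketch using the zip4j 1.x API (the archive path, destination folder, and password below are placeholders, not values from the question):

```java
import net.lingala.zip4j.core.ZipFile;
import net.lingala.zip4j.exception.ZipException;

public class UnzipExample {
    public static void main(String[] args) {
        try {
            // Placeholder archive path for illustration.
            ZipFile zipFile = new ZipFile("protected.zip");

            // Only set a password when the archive is actually encrypted.
            if (zipFile.isEncrypted()) {
                zipFile.setPassword("secret");
            }

            // Extract every entry into the destination folder.
            zipFile.extractAll("output/");
        } catch (ZipException e) {
            e.printStackTrace();
        }
    }
}
```

In zip4j 2.x the class moved to net.lingala.zip4j.ZipFile and the password can be passed to the constructor as a char[], e.g. new ZipFile("protected.zip", "secret".toCharArray()).extractAll("output/").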
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201812.2/warc/CC-MAIN-20190318232014-20190319014014-00427.warc.gz
CC-MAIN-2019-13
380
3
https://community.ortussolutions.com/t/the-view-general-dspstart-cfm-could-not-be-located-solved/1593
code
I wanted to check the ColdBox AJAX integration, and tried to install the ColdBoxReader sample. I created the database and datasource fine. When I browse to the application's home: .../coldbox/samples/applications/ColdBoxReader/ I receive the following error: Application Execution Exception Error Type: plugins.renderer.ViewNotFound : [N/A] Error Messages: View not located The view: general/dspStart.cfm could not be located in the conventions folder or in the external location. Please verify the view name. And as I was writing this, I figured out the solution. Problem: all files under the views directory have lowercase names, the views are called using camel casing, and I use an operating system that distinguishes between file name casing (Linux). The solution: rename all files to have the proper file names. Could the next release include proper casing for the file names? Do you still use a filesystem that doesn't care about file name case? Thanks for a great framework in any case!
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00606.warc.gz
CC-MAIN-2022-40
983
17
https://pulse.box.com/forums/909778-help-shape-the-future-of-box/suggestions/42618682-candidate-destinations-are-not-displayed-when-invi
code
Candidate destinations are not displayed when inviting files to collaborate (Box for iOS) Box for iPhone/iPad and Box for EMM don't offer suggested recipients when you invite collaborators to a file. When inviting collaborators to a folder, a list of possible recipients is displayed. Please do the same in the mentioned apps. This suggestion is under consideration by the Product Team for future development; however, it is not on our roadmap. Please share additional feedback and use cases to help us understand the importance of this release. Thank you for helping make Box better!
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00278.warc.gz
CC-MAIN-2022-33
584
5
https://networkfinds.com/can-you-teach-yourself-to-code/
code
The conventional way to become a coder is to get a formal college education. However, teaching yourself is an option if earning a bachelor’s degree is not possible. Before opting for this method, you should know that it will not be easy, since coding is highly technical, featuring intricate data structures and different algorithms. Fortunately, teaching yourself coding is convenient since you can go as slow or as fast as you wish. In addition, you can focus more on the areas you feel you need more practice in and less on the parts you have already mastered. How to Teach Yourself Coding Utilize the numerous online resources for learning to code, including books, video tutorials, and programming websites. You must be able to stay disciplined and motivate yourself if you decide to learn to code independently. If you are not good at studying alone, then join coding classes with mentors or tutors, but keep in mind that such courses will cost you more. Moreover, it is paramount to find a mentor in the UX design field. You should also ask yourself whether you are able to pay to study or are looking to take advantage of the free resources. Fortunately, the internet offers both options. To become an impeccable coder, you must stay on top of what is happening in the industry because this world is ever-changing. You also need to be flexible and willing to evolve. Additionally, develop your problem-solving skills, editing and detail skills, database management, communication skills, and typing skills (you need to be fast). 8 Free Sites To Learn Coding Free coding websites help you hone your coding skills without paying a dime. You will be able to learn, engage, practice, code, identify your mistakes, and monitor your progress (knowing the areas you have mastered and the ones you haven’t). Below are the most reliable free sites for learning to code. 1. Coursera This is one of the most popular sites for learning to code, offering many coding courses.
Managers, employers, and, most importantly, students can benefit from this website, which helps them develop skills. Coursera provides free tutorials, courses, and plenty of other resources curated to help you become an excellent coder. The online learning platform partners with more than 200 companies and universities to offer diverse courses. Top university professors compile the learning resources and teach the classes. There are tons of free courses on Coursera, but you will need to pay to access some classes. Also, you will need to pay to get your completion certificate. 2. FreeCodeCamp Everything on this platform is free, and you can choose a learning schedule that works for you. You may need to use a different platform as your skills advance, since some reviews indicate that FreeCodeCamp’s learning materials are better suited for beginners. 3. Codewars This website is ideal for intermediate and beginner learners, offering many coding challenges you can solve directly on the platform. It also provides a community where you can compare and share solutions, which you can use to build your understanding and skills. It will help you boost and hone your skills through repetition and practice, and the challenges are in various coding languages. Codewars is also entirely free, but it does not offer structured courses or lessons. In addition, you must prove your coding skills to sign up, so use this platform only if you have some skills in coding. 4. Khan Academy This website makes learning programming easier, offering lessons in various programming languages, and it is completely free. It is the perfect tool for beginner coders. Khan Academy is an excellent place to begin your coding journey for games or art. However, it is not the best resource if you are interested in business-oriented development. It covers different subjects using expert-taught video tutorials. The popular non-profit organization has mobile applications for Android and iOS and does not have in-app purchases or require any subscriptions.
This website is also child-friendly, and you can track your learning progress. However, keep in mind that the quality of the content varies. 5. edX This is another learning website, working with more than 160 universities and providing over 3,000 valuable, high-quality courses. This open-source platform was founded in 2012 by MIT and Harvard University; hence users learn cutting-edge theories and technologies. edX is suitable for students who value formal education because it offers college-level computer science and computer programming courses from the world’s most reputable universities. Part of your learning involves virtual environments, tests, and quizzes. In addition, since edX is a free resource, you can study coding at your own pace. This website also has a premium version with exams, graded assignments, and certification. The downside of edX is its course inconsistencies. 6. Geek for Geeks This free e-learning website focuses on helping users learn computer science and programming. Make use of the quizzes, articles, contests, courses, and tutorials on this platform to become an excellent coder. Also, watch out for the job listings. You can find valuable and extensive content on different concepts and topics on Geek for Geeks to help you advance your coding skills. The code on this platform covers various programming languages, including Python, C++, C, and Java. Geek for Geeks is also an excellent resource and reference for understanding competitive programming and taking part in different events. It also provides the fundamentals needed by beginner-level developers. 7. W3Schools Besides interactive, useful code blocks, W3Schools also provides valuable information in a documentation style. The programming tutorials on this platform will help you improve your coding skills, and it also offers excellent examples for experimenting with code blocks. In addition, it has great courses, exercises, and references.
8. HackerRank You can use this helpful website to further your coding and programming skills using its beneficial resources. It runs crash courses like the 30-day challenge, designed to help you gain a deeper understanding of programming languages. This platform is great for companies and developers alike: on the one hand, it helps developers strengthen their coding skills, and on the other, it allows companies to recruit the best candidates. HackerRank is a great place to be if you are seeking employment as a programmer. Independently learning to code requires using the best resources. With so many options today, finding the one that suits you best isn’t easy, so you must research, try different sites, and ask for suggestions. The list compiled above contains valuable, trustworthy sites to aid your learning.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00859.warc.gz
CC-MAIN-2023-50
6,678
28
https://linbit.com/blog/what-is-rdma/
code
DRBD9 has a new transport abstraction layer and it is designed for speed; apart from TCP, the next-generation link will be RDMA. So, what is RDMA, and how is it different from TCP? The TCP transport is a streaming protocol, which for nearly all Linux setups means that the Linux kernel takes care to deliver the messages in order and without losing any data. [1. While specialized hardware is available and helps a bit by calculating the TCP checksum, we have seen that these can cause more problems than they solve.] To send these messages, the TCP transport has to copy the supplied data into some buffers, which takes a bit of time. Yes, zero-copy send solutions exist, but on the receiving side the fragments have to be accumulated, sorted, and merged into buffers so that the storage (hard disks or SSDs) can do its DMA from contiguous 4KiB pages. These internal copy functions, moving data into and out of buffers, cause one of the major bottlenecks for network IO: you can start to see the performance degradation in the 10GBit/sec performance range, and it continues to severely limit performance from there on up. All these copy functions also cause higher latency, affecting that all-important IOPS number. We talk about this in our user guide: Latency vs. IOPs. In contrast to that, RDMA gives network hardware the ability to directly move data from RAM in one machine to RAM in another, without involving the CPU (apart from specifying what should be transferred). It comes in various forms and implementations (InfiniBand, iWARP, RoCE) and with different on-wire protocols (some use IP, can therefore be routed, and so could be seen as "just" an advanced offload engine).
The common and important point is that the sender and receiver do not have to bother with splitting the data up (into MTU-sized chunks) or joining it back together (to get a single, aligned, 4KiB page that can be transmitted to storage, for example) – they just specify "here are 16 pages of 4KiB, please store data coming from this channel into these next time" and "please push those 32KiB across this channel". This means real zero-copy send and receive, and much lower latency. Another interesting fact is that some hardware allows splitting the physical device into multiple virtual ones; this feature is called SR-IOV, and it means that a VM can push memory pages directly to another machine, without involving the hypervisor OS or copying data around. Needless to say, this should improve performance quite a bit, compared to cutting data into pieces and moving them through the hypervisor… 😉 Since we started on the transport layer abstraction in 7d7a29ae8, quite some effort has been spent in that area; currently we're doing a few benchmarks, and we're about to publish performance results in the upcoming weeks – so stay tuned! Spoiler alert: we're going to use RAM disks as "storage", because we don't have any fast-enough storage media available…
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00338.warc.gz
CC-MAIN-2021-25
2,969
10
https://www.eventbrite.com/e/tampa-sharepointoffice-users-group-october-tickets-884434367?aff=eorg
code
SharePoint Saturday Tampa Registrants, I want to thank you all again for making our last two events so popular. The reason I'm writing you now is to announce the formation of a User Group for the Tampa area. Some of the feedback from SharePoint Saturday asked for more Power User and Business Analyst sessions. One of the goals of this group is to address not just all things SharePoint, but also how other Office and Microsoft based solutions are executed and published via SharePoint. We hope to draw nationally known experts to speak here in Tampa, but we also want to create spaces to develop and show off our area's expertise, so start thinking about those presentations you have been thinking about doing. Our first formal meeting will be October 12th and we will again be guests at the Microsoft offices in Tampa. I'll send the agenda out as soon as I have our speaker 100% confirmed. For non-Tampa folks: I realize that some of you are not local, but apart from the announcement of the Tampa group I can also announce that there is going to be another SharePoint Saturday in South Florida, starting on December 4th at Nova University in Ft Lauderdale, FL. Right now we have calls for speakers and sponsors out, so if you are interested in either, please let either me or Chuck Hughes know via the site. If you can't make that one, we are also in the initial phases of coordinating SharePoint Saturdays with other user groups for Orlando and NASA, and of course SPS Tampa will be back for sure in 2011. I will send out those announcements as they come in. If you need or want more info on other user groups, here are the links to those: South Florida: http://sfspug.com/default.aspx If you prefer to get only Florida SharePoint community mails, let me know so I can get you the announcements you want. On that note, there is a Florida SharePoint Community Central Portal in the works that we hope will be the source for all news about our community endeavors.
If you are unable to make the Tampa meeting, we hope to see you at the SharePoint Saturdays coming throughout Florida. If you have any questions, please contact me. When & Where Michael Hinckley, MCSA, MCITP, MCTS, has over 10 years specializing in solution architecture for organizations that span from small businesses to global corporations. He is currently the SharePoint Practice Program Manager at Tangram. Michael is a recognized speaker and evangelist for the Microsoft SharePoint and Business Intelligence stacks. He organizes SharePoint Saturdays throughout Florida and runs the Tampa SharePoint/Office User Group. He is a contributing author of the book Microsoft SharePoint for Business Executives: Q&A Handbook.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00247-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,664
14
https://www.browncafe.com/community/threads/ground-or-express-handler-question.356797/
code
Hello. I've been with Ground as a PT package handler for 3 months now. I don't mind the physicality of the work, but I'm moving closer to an Express location soon and wondering what a Material Handler position would be like. My work with Ground so far has been a solid 20 hrs/week with the option of maxing out at 30 as a part-timer. According to the ad, the Express job would be roughly the same 20 hours, but at a better wage. I have read, however, about the 2-3 hour unpaid "breaks", in which one basically waits around for a plane to land??? Can anyone elaborate on this and/or on the duties of the job in general? Will I have the chance to pick up extra hours if need be? Would I even be considered as an Express Handler with only 3 months of loading experience, or should I stick it out with Ground for a little longer? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891926.62/warc/CC-MAIN-20180123111826-20180123131826-00614.warc.gz
CC-MAIN-2018-05
827
1
http://gmatcentral.org/plugins/viewsource/viewpagesrc.action?pageId=16318578
code
Before starting, make sure that you have cloned GMAT from the GSFC-internal repository. For public developers, you can obtain the source code from SourceForge (the Git command is: "git clone ssh://[email protected]/p/gmat/git gmat-git"). These instructions refer to <GMAT> as the top-level GMAT repository folder.

Figure 1. The <GMAT> repository layout

The GMAT build process can be broken down into four main steps: The first two steps are generally "one-time" processes that are performed immediately after downloading the GMAT repository. They result in a build system (e.g. Visual Studio solution or makefiles) that will intelligently rebuild GMAT components as needed when source or configuration files are changed. Table 1 describes all software dependencies for GMAT.

Table 1. GMAT Dependencies
|Name||Version||Used in GMAT||Download|
|CSPICE||N0065||Core Dependency||Configure Script|
|Xerces||3.1.4||Core Dependency||Configure Script|
|Python||3.4, 3.5||PythonInterface Plugin||Mac, Windows, Linux (Package Manager)|

The <GMAT>/depends folder contains scripts to automatically download and configure the core GMAT dependencies.

Figure 2. <GMAT>/depends folder layout after dependency configuration

Run the configure.bat (Windows) script to set up core GMAT dependencies. The <GMAT>/depends folder structure should look like Figure 2.

Requirements (in addition to Step 1 requirements): Launch Matlab, run the command:

Tip: Select the "Grouped" option in the CMake GUI (Figure 3) to sort CMake variables and make them easier to find.
Tip: Select the "Advanced" option in the CMake GUI (Figure 3) to display variables, such as PYTHON_LIBRARY, that are normally hidden.
Tip: All CMake commands can also be performed on the command line instead of using the GUI. See below for instructions.

Figure 3. Components of the CMake GUI

<GMAT> cloned repository on your computer; <GMAT>/build/<OS>-cmakebuild for this value.

Figure 4. Choosing a generator in CMake

4b) will be populated: Figure 5.
Output of CMake Configure

Use the Configure results output box (Figure 3 section 4a) to change variables in the CMake variable list (Figure 3 section 4b) as follows. In addition to errors, there are several CMake variables that allow you to control how the build system configures GMAT:

|CMake Variable (Group)||Description||Associated CMake Error|
||Path to CSPICE root directory||CSPICE NOT FOUND (make sure to run depends script from Step 1)|
||Path to F2C root directory. Note: this should generally be ||F2C NOT FOUND (make sure to run depends script from Step 1)|
||On makefile systems, this specifies the desired build type. On VisualStudio/XCode systems, this specifies all possible build types. Note: On makefile systems, you should create a separate out-of-source build folder for each desired build type (Figure 3 box ||
||Location to install GMAT when doing or building the VisualStudio ||
||Full path to the top-level GMAT Proprietary Plugins (folder that contains CMakeLists.txt). This will be automatically found if you name it ||
||Path to MATLAB root directory (on Mac, this is the path to MATLAB_R20xxx.app)||Matlab NOT FOUND (make sure MATLAB |
||Whether to build a particular GMAT Plugin. Note: the proprietary plugins only show up here if ||
||Mac/Linux: Path to wxWidgets (usually this is the wxWidgets; Windows: Path to wxWidgets||wxWidgets NOT FOUND (make sure to run depends script from Step 1)|
|PYTHON_LIBRARY (PYTHON) (Advanced Variable)||Requests that a specific installation of Python be used for GMAT's PythonInterface plugin. If this variable is blank, the latest found version of Python will be used. Set this to the FULL PATH to pythonXX.lib (e.g. python35.lib) if you installed Python to a custom location.||Python NOT FOUND (make sure Python is installed, and variable is set properly)|
||Xerces library and include folder locations.||Failed to find XercesC (make sure to run depends script from Step 1)|

When all CMake errors are handled and you have specified all desired GMAT options, click "Generate". CMake will create the build system in the chosen out-of-source build folder (Figure 3 box 2).

CMake is fully scriptable and can be called from the command line instead of using the GUI. This is especially useful on operating systems (e.g. Red Hat Linux 7) where the GUI is unavailable.

cd <GMAT>/build; mkdir macosx-cmake; cd macosx-cmake

Then run CMake against the <GMAT> folder and specify options: cmake [options] ../.. where an option looks like -DPLUGIN_CINTERFACE=OFF. Multiple such options can be specified.

Open the GMAT.sln Visual Studio solution. After loading, you should see the following projects:

Figure 5. CMake-generated VisualStudio2013 Solution

ALL_BUILD: The default startup project. Ensures that all other projects are up-to-date, then builds them.
ZERO_CHECK: Performs the work to ensure all other projects are up-to-date. It is automatically built along with all other projects.
INSTALL: Creates a standalone GMAT folder containing all executables, plugins, data files, samples, and documentation.
GmatGUI, Plugins: The various GMAT components. You can build these individually if desired.

Build the ALL_BUILD project. Depending on your system speed and number of selected GMAT components, this may take a while! Build the INSTALL project if you want a fully standalone and relocatable version of GMAT.

Makefiles are run through the command line, which on Mac and Linux can be accessed via the Terminal application. In these instructions, <CMake_build_path> is the path to the build system folder that you chose in Step 2 (Figure 3 box 2).
($ is your command prompt):

$ cd <CMake_build_path>
$ make

Use "make -jN" to significantly speed up the compile time, and run "make install" if you want a fully standalone and distributable version of GMAT. See the following table in case there are build errors when compiling or installing GMAT.

|Unresolved External Symbol *_Py_* referenced in function ...||Python Interface||32/64-bit Python found by CMake is different than the architecture of the compiler (VisualStudio, gcc, ...)||Make sure to install the correct 32- or 64-bit version of Python and specify it via the PYTHON_LIBRARY CMake variable.|
|CInterface Matlab thunk files not produced during INSTALL step||C Interface||There is a known incompatibility between Matlab R2015a/b and XCode 7 that prevents the CInterface thunk files from being built.||Perform the Matlab initialization instructions.|

After building, you have several options for how to run GMAT:
- The CMAKE_INSTALL_PREFIX folder that you chose in Step 2. The GMAT executables will be in the <GMAT>/application directory.
- You can run GMAT from the build output (e.g. debug/) subfolders without having to perform the optional INSTALL step. This allows for a more rapid edit-build-test development cycle.
- On Windows, you can also run the GmatGUI project directly from within VisualStudio. This allows for in-program debugging with breakpoints. Right-click on the GmatGUI project and select "Set as Startup Project", then select menu item Debug -> Start Debugging (or Start Without Debugging for Release configurations).

wxWidgets v3.0.2 has a known bug (documented here) on Mac OSX 10.10+ that causes a build error. As of R2016a, the GMAT dependency configuration script (configure.sh) implements this fix internally, so GMAT users do not need to take any additional action for wxWidgets to build on Mac.
https://blender.stackexchange.com/questions/93601/ruler-doesnt-start-from-mouse-scale-on-object-pose-mode-is-different
The ruler's start point appears at a certain distance from the mouse.

As for the issue with scale, I've set my scene to metric. I'm using an armature to drive shape keys, so in Object Mode the armature's movement is 1cm = 1cm, while in Pose Mode, moving the bones, 1cm = 10cm. Does anyone know what's wrong? This is the 3rd time I'm using this method and it worked fine in previous projects.

I've tried resetting the user preferences but it didn't work. Let me know if there's anything else I need to show or clarify.
https://launchdarkly.com/blog/test-in-production-a-panel-discussion-on-debugging-kubernetes-in-production/
Test in Production: A Panel Discussion on Debugging Kubernetes in Production

The final session from our April Test in Production Meetup featured a panel discussion led by Albert Wang, Senior Software Engineer at Uber. He spoke with the featured speakers, Michael McKay, IBM, and Andrew Seigner, Software Engineer at Buoyant.

“The Watson team, that’s the latest trademark that everyone’s hearing from IBM, they have a huge deployment, a huge set of Kubernetes clusters, and they had the exact same problem we had. It’s that they’ve got hundreds and hundreds of deployments, and how did they manage those? And so we feel we have a pretty good solution.” – Michael McKay, IBM

Watch the discussion below to hear more details about how Michael and his team are deploying using Kubernetes and feature flags at scale, and how Andrew and his team are debugging Kubernetes clusters in production. If you’re interested in joining us at a future Meetup, you can sign up here.

Albert: All right, everyone. Let's get started with a Q and A. I want to thank you guys ... What?

Albert: I want to thank both of you for pretty deep, technical, but still accessible presentations. I will open it up to the audience, but my first question is actually for Michael, which is: what's really interesting is to see that you guys are using feature flags for gating your deployments. It's an interesting application, and I was wondering, since feature flags are often used in the services I assume you're deploying, what kind of relationship you might have between the deployment feature flags and the more business-logic feature flags used within those services?

Michael: All right. So what's actually interesting is that we started using feature flags specifically for deployments before we actually started using feature flags for what people typically use them for. Right now, there is no connection.
A deployment's a deployment, and a new feature going into code behind a feature flag is completely independent of it. In fact, the cool thing about LaunchDarkly is it actually allows us to keep those separate.

Michael: So we have a separate project just for deployments, which has its own SDK key for the clients. We have a separate one just for our user feature flags. So as of right now, we don't do that, and our typical model is that our deployments ... We actually never introduce new features with a deployment, and because of that, by definition, we have that disconnect: just by our processes themselves, we really don't relate deployments with new features, if that makes sense. It's interesting.

Albert: Yeah. So what would the process be as, perhaps, a developer of one of these services? I have some new code. I have ... It's, I assume, gated by a feature flag, so I would deploy by setting the deployment status feature flag, and then I would go and change my actual logical feature flag within my service?

Michael: Yep. Actually, we had a really good example. We had this big IBM conference, and we wanted to release a new capability. One thing we never had before, which seems almost impossible to think about now, is there was no way just to jump to your Kubernetes dashboard from the IBM Cloud Container Service. So we actually put this new feature in, and when we first put it in, it was hidden. No one saw it. It was sitting behind the feature flags. So we did a deployment probably about two weeks before it was even exposed to anyone.

Michael: What the developers did is they wrote the code with the feature flag enabled in their code for the UI, and they had rolled out that code weeks previous to the conference. And then during the conference, we could then use the feature flags to opt people in. Actually, we were opting people in by email domain at the time.
Michael: So that's kind of a case where the developer did the normal process, where they'd code it, and push their code out to the environment. Then when they're ready, they use the separate project in LaunchDarkly to actually enable that feature flag for that new feature.

Albert: Interesting. Pretty interesting process there. Hopefully ... It sounds like it'll prevent quite a few outages.

Michael: It's a learning experience for us, to stop rolling out new features via deployments. So this has been really good for us.

Albert: Very nice. And my second question for Andrew is: your debug process seems to be pretty efficient in being able to introspect, I guess, the logic of stateless services. And I was wondering how it would work with, say, a stateful service, or perhaps even a database.

Andrew: Sure. Let's say you have a service talking to a database; we can give you TCP information on that today, so you could see how many bytes are going through, what's the throughput, and all that. We don't have success rate on that kind of connection yet. We've actually gotten some GitHub issues requesting Redis, for example. I think, as there's more interest in particular protocols, we will be making the Rust proxy aware of those protocols, and then be able to provide that kind of information for those connections.

Albert: Okay, very nice. So it seems like you guys support a variety of different transport layers. I guess what I was getting more towards is that you may have some edge cases that depend on the state of the various values ... perhaps in your database or within your services. And depending on that state, you may trigger perhaps a few if statements or whatever, which may change the behavior that your service has and perhaps cause an error to be emitted.

Andrew: Yeah, so that's interesting. You mentioned changing behavior based on responses and things like that.
We have an older service mesh that's much more rolled out in production called Linkerd, which allows you to classify behavior of your application based on certain types of request. So if you decide that GET requests are retriable, but POST requests are not, something like that, all of that's configurable. One of the lessons we took from that technology was that it was so configurable that the answer to that was us building Conduit, which requires very little config. I think at some point down the road, we're going to have to learn how to give users the ability to control that kind of behavior, depending on how they want their app to behave.

Albert: Very nice. So let's open this up to questions from the audience. Does anybody have any questions?

Michael: Basically because IBM, instead of one big company, is like a million smaller companies. For example, our organization for just IBM Cloud Container Services, our tribe, is about 200 people. That covers program management, management, developers, support, everything like that. So when we talk about this, it's really just corralling those 200 people, and even just a subset of developers, around the process for doing that.

Michael: What's kind of cool at IBM is that, being such a big company, we can actually do internal meetups, and you get to talk to people that are having the same experiences, but they know how to fix it. So we've been going on this dog and pony show, just at IBM, saying that we've got this really cool deployment process using Kubernetes and LaunchDarkly, and getting people on board to be using that as well.

Michael: In fact, the process that we're doing today with LaunchDarkly and Kubernetes, we're going to start exposing it to other teams in IBM. The Watson team, that's the latest trademark that everyone's hearing from IBM, they have a huge deployment, a huge set of Kubernetes clusters, and they had the exact same problem we had.
It's that they've got hundreds and hundreds of deployments, and how did they manage those? And so we feel we have a pretty good solution. We're kind of internally evangelizing this, spreading the word, and getting people to use it.

Michael: I mean, I've known about feature flagging for a while. In fact, at my previous job before this one I'm working on right now, we had actually looked at LaunchDarkly and Optimizely. You do a Google search for feature flags, and Optimizely and LaunchDarkly are right at number one and number two, and I think they flip daily. There are some groups in IBM that are using Optimizely today, and we actually looked at Optimizely for this process instead of LaunchDarkly, but what we've found is that Optimizely is more suited for experimentation, canary testing, things like that. You can do feature flags, but it doesn't seem to be their core offering.

Michael: Then we started using LaunchDarkly, and what we really loved is just the simplicity of it, the power of it, and you guys have actually been really great for support, as well.

Michael: So what was interesting is that, when I came to this new job, we had this deployment process, and the fact that I had used feature flags in my past job is where that mental leap of faith (from users to clusters) came into play.

Michael: Then we said, "Hey, we should use feature flags for deployments too". Having said that, we've been using feature flags for deployments, and we're now just starting to use feature flags for what they're typically intended for.

Michael: Trust me, we've thought about this. We looked at LaunchDarkly, and we were like, "We could probably do this. How hard could it be?"

Michael: And we've actually learned from our past mistakes. One of our mottoes is: if you don't have to run it yourself, don't run it yourself. IBM itself ... I mean, it's a 100-year-old company that's been going through some major transformations in the past few years.
One of those is actually really starting to look at smaller companies. How do they work? What makes them successful? Part of that is actually starting to get outside of the company and start picking up existing services, because what you can find is that, using a feature flag service like LaunchDarkly ... Sure, we could write it ourselves, but you'd end up spending probably a couple orders of magnitude more money just in resources, and compute, and storage. I mean, just all this stuff, and you'd probably end up with something that was not even nearly as good as what LaunchDarkly provides.

Michael: One thing we're learning is that we don't want to try to be experts in everything. We don't want to be an expert in building our own feature flag service. We don't want to figure out how to run Mongo databases. We don't want to figure out how to do all this stuff, so we're passing everything off to people who actually are experts in these areas.

Michael: Actually, IBM is really good about this. Up until now, literally I've been putting this 00 a month bill on my Amex, and we're finally actually getting through ... Probably the biggest hurdle is just getting through the IBM procurement process to get a PO generated, so we can just start charging our LaunchDarkly bill and not have to submit an expense report every month for this thing. But it's been really good.

Albert: So I have another question for Andrew. You have an interesting way of essentially instrumenting services. It seems you're essentially wrapping these services with a proxy layer. In my own work, I've also seen a lot of people moving towards distributed tracing, using context propagation. I was wondering how your work compares.

Andrew: Yeah, for sure. We've worked a lot with systems that leverage distributed tracing. That previous service mesh I mentioned that's still out there in production, that has first-class support for distributed tracing. What we found with Conduit being ...
One of the design goals being zero config: to do distributed tracing well, you kind of have to instrument your app. You probably have to make some changes to your application code.

Andrew: Our thinking is that there are certain cases where distributed tracing is the only tool that'll work. If you have one very slow, long request, and you want to understand everything it's going through, and follow it, and pinpoint what's the slow piece, distributed tracing will give you that.

Andrew: Our thinking is that for 90% of use cases, as far as why is it slow, why is it broken? Providing these tools that just let you look at each hop and understand what are the success rates and latencies between your services, we think we get you most of the way there.

Andrew: There have definitely been requests to add distributed tracing support into the new service mesh. We're thinking about it.

Albert: Very nice. Yeah. Distributed tracing certainly gives you a lot more, some certain benefits, but it's a lot more work to implement. Whereas your proxy seems to be very ... As you say, zero config, so a lot easier to implement.

Andrew: Yeah. That was definitely a lesson learned from Linkerd. We have lots of people using that, and they start using it, and the next question is: how do I make distributed tracing work?

Andrew: It's like, well, your app needs to pass headers around, which is fine for some folks. For others, they want something that just works out of the box, without changing anything. So it's a bit of a trade-off.

Michael: Actually, I've got a question for Andrew.

Andrew: Yes, sir?

Michael: We've got a lot of customers coming from the financial industries, and they love encryption. What they're finding is they don't like the idea that once you get past the ingress, nothing's encrypted anymore. So how would you deal with something like, if the ... themselves are actually exposing HTTPS endpoints?

Andrew: Yeah.
We're doing a release tomorrow, and we're doing another release in about a month. And that release will include TLS between every connection in your cluster. The idea is that we will run a certificate authority for you. Again, you don't do anything, and it will encrypt every connection between all of your services; even if they're just doing HTTP/1 or whatever, we'll just enable that for you if you want.

Michael: Are you selling that as a separate offering there?

Andrew: Nope. It'll all be part of the open-source thing.

Michael: That's pretty cool.

Andrew: Our road map is on that website, and all the code's there, and all the issues are there. So have a look. It's coming soon. The work we're going to do in the next release, it's already in the design phase, and it's in GitHub issues as is. So it's there to check out.

Andrew: So if you have input and opinions, definitely bring it in.

Albert: So I have another question for Michael, which is: how does the observability or visibility story work with IBM Cloud? I guess, again, as a scenario, I'm a developer. I deploy; how do I, say, make sure that deploy is safe, that there aren't any exceptions or errors? And if there are, is there a way for me to automatically detect it or perhaps even take action?

Michael: Yep. So we have a variety of means for that. The first step of visibility is what we showed you with razadash, where we can just see this thing actually got deployed. Now, if there's an error during deployment: the cluster updater basically is saying, "Hey, LaunchDarkly, which version do I need to run for this particular microservice?"

Michael: Then it'll just apply that to Kubernetes. Now if Kubernetes comes back and barfs, and says, "You've got malformed YAML," or, "Can't connect to the API server," or whatever the issue may be, we will actually trickle it back up to the razadash service. It all shows up in the same view, where their deployments go.
Michael: Now once it actually gets running in the environment, we do run a set of tests, for example, in our stage environment, and even our production environment. So we do have some end-to-end tests that we run, and those test results also percolate back up to razadash. So part of our visualization story is: let's just push everything to this one place. From our point of view, the data is cheap. That razadash database, it's a Mongo database, and it's one gigabyte, so it's a drop in the bucket, considering how much value we get out of it.

Michael: Now in terms of additional logs, like the standard out, standard error, or the other log files for the pods themselves, the Cloud Container Service handles that for you. So all the logs are actually forwarded to our log-man service. You can go to the Kibana dashboard, and you can see all the logs from all your applications across all the clusters in one spot.

Michael: It's a lot of data, but you can see that.

Andrew: The origin story of our company is we were a bunch of infrastructure engineers from Twitter. We all joined when the site was down constantly, and we were involved in migrating it from the largest Rails app on the planet to a bunch of microservices. And in doing that, we found that things like retries, and backoff, and telemetry, and routing policy needed to be centralized. We had a lot of teams that, all of a sudden, started building that same thing.

Andrew: That eventually coalesced into a library called Finagle. It's a library written in Scala. That's what our first service mesh was built around: Linkerd wrapped Finagle. And then Conduit was a lot of lessons learned from that. We found that with application developers, you've got a lot of teams, and they may be writing in different languages, using different types of interfaces to talk to each other. The service mesh really gives a uniform way for all of your services to communicate. That said, it's another piece in your infrastructure. Right?
Andrew: We're super-sensitive to that, and that is a lot of why we're emphasizing zero config so much in this new one. The adoption we've had with Linkerd has been great, but one of the comments is definitely that it's a lot of work to set up. It's a lot of config that you have to learn to get it to work the way you want it.

Michael: I had another question for you too. So have you guys ever thought of just ... as Helm charts?

Andrew: Yes. So that's an open GitHub issue, as well, right now.

Michael: All right.

Andrew: So we've had requests to have it automatically integrate with Helm. That's probably coming. It may be in the works already.

Michael: I did want to mention one more thing about your previous question. So we actually modeled our visibility part, and when to deploy, off of GitHub. What's interesting with GitHub is that not only do they roll out code to a small portion of users, they will check logs. They'll run tests, but what's most interesting, they actually will check their Twitter feed to see if they have issues. So that's part of their feedback to know when their deployment is done: they haven't had any people tweeting about GitHub being down.

Albert: The good old Twitter-based debugging.

Andrew: We use that ourselves, for sure.

Albert: Thank you.
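The deployment pattern Michael describes, where each cluster asks the feature-flag service which version of a microservice it should run and then applies that to Kubernetes, can be sketched roughly as follows. This is a stand-in, not IBM's actual updater: flag_store, the flag key format, and the version numbers are all hypothetical, and a real setup would query a LaunchDarkly SDK client instead of a dict. apply_to_kubernetes just returns the kubectl command it would run.

```python
# Sketch of a flag-driven "cluster updater" (all names hypothetical).
# flag_store stands in for a feature-flag service: for each service's
# deploy flag, it maps targeted cluster keys to a version, with a default.
flag_store = {
    "deploy-version/razadash": {"prod-us-east": "1.4.2", "default": "1.4.1"},
}

def desired_version(service: str, cluster: str) -> str:
    """Return the version this cluster should run for a service."""
    targets = flag_store.get(f"deploy-version/{service}", {})
    return targets.get(cluster, targets.get("default", "latest"))

def apply_to_kubernetes(service: str, version: str) -> str:
    """Stand-in for applying the rollout; returns the command it would run."""
    return f"kubectl set image deployment/{service} {service}={service}:{version}"

# The targeted cluster is rolled forward; every other cluster keeps the default.
print(apply_to_kubernetes("razadash", desired_version("razadash", "prod-us-east")))
print(apply_to_kubernetes("razadash", desired_version("razadash", "stage-eu")))
```

Changing the flag's targeting in the flag service, rather than pushing new YAML, is what lets a rollout be staged cluster-by-cluster (or rolled back) without a redeploy.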
http://www.geekstogo.com/forum/topic/88725-my-internet-explorer-is-running-slow/
My Internet Explorer is running slow!

Posted 01 January 2006 - 07:35 PM

Posted 01 January 2006 - 10:12 PM I guess I was wrong..

Posted 03 January 2006 - 07:51 PM Also remember that we are all volunteers, and right now is holiday season, so replies are slowed down. About your problem with IE, have you tried cleaning out your temporary internet files and cookies? Also, have you defragmented your computer in a while? If you think it's a problem with IE itself, try repairing it here: http://www.geekstogo...als-t87507.html Also, a lot of times when browsers are slowed down, it has something to do with malware. Try running a virus scan.

Posted 03 January 2006 - 09:01 PM No thanks to any of the help given out.. I finally found out what the problem was.. it was because of my firewall. The firewall I was using was slowing my computer down because it would have to keep notifying me about things and a lot of other stuff. The firewall I was using was Outpost Firewall. Yay! I'm so happy that I fixed my Internet Explorer all on my own! Although, I am quite sure that some of the things given to help fix my Internet Explorer probably did fix some issues. I used all methods.

Edited by Iron-Wolf, 03 January 2006 - 09:01 PM.