Dataset schema:
url: string (length 13 to 4.35k)
tag: string (1 class)
text: string (length 109 to 628k)
file_path: string (length 109 to 155)
dump: string (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://ucarehq.com/blog/good-bie-old-friend/
code
Once upon a time, Internet Explorer was the greatest web browser in all the land. But how times have changed: IE is no longer the prettiest girl at the party. 😊 We’ll be putting Internet Explorer out to pasture (ending support) on June 1, 2019.

Why the change? Basically, very few people still use Internet Explorer and it now has unpatched security vulnerabilities. Supporting IE takes a lot of our time, so we will be ending support for it so that we can focus on adding new features.

The gritty details

A security researcher notified Microsoft on March 27 of a vulnerability that allows hackers to steal information from your computer when using IE. Microsoft has said they will fix this, but they haven’t committed to a date. The more worrying aspect is that in just over eight months support for Windows 7 ends, so there will be no more security fixes for IE after January 14, 2020.

Given this, we’ve been reviewing usage of UCare and noticed a significant drop in usage of Internet Explorer (IE) over the last year. UCare only supports IE 11, as it is the only version Microsoft still supports, but usage of IE 11 has dropped this year. IE is now being used by less than 0.5% of people (1 in 200); if we look at just people who have UCare access, the number drops even further to just over 0.2% (1 in 500).

Here’s the real kicker though: we spend a significantly larger percentage of time testing UCare to ensure it works in IE, and even more time fixing issues that only affect IE. This is valuable time that could be going into simplifying UCare and creating new solutions that better fit our customers’ needs. For instance, our engineering team is currently working on new functionality related to service planning, and getting it to work in IE has been challenging and very time consuming. Because IE is ancient in internet land (over six years old), it also limits the technologies that we can use, which means that some aspects of UCare can’t take advantage of the faster and simpler options available in modern web browsers.

While still available, IE has actually been deprecated by Microsoft on Windows 10 and replaced with a newer browser called Edge, which means that Windows 7 and 8.1 are the only operating systems on which people are likely to be using IE. The good news is that everyone still using IE can move to Google Chrome or Microsoft Edge. An additional benefit is that Edge will soon be using the same underlying technology as Chrome (which is a cousin of the technology in Safari). This will cut down our testing even more, so that we mainly need to test for Chrome, Safari and Firefox, giving our team more time to focus on adding value.

We know this isn’t much time, but we want to roll out new service-related functions that IE is holding up, and given the small number of people still on IE we are hopeful that they can make the change without too much fuss.

What to expect

If you are using IE then you will soon start to see a notification that support for IE is ending. After June 1 this notification will become permanent, and UCare may no longer work for the few remaining people who are still using IE.

With these changes we’re working hard to make UCare easier to use. If you have any feedback we’d love to hear from you, simply email [email protected]. Also, if we can be of assistance with this transition please let us know, we’re here to help.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00060.warc.gz
CC-MAIN-2020-34
3,480
15
http://academy.dlink.in/forum/topics/hyper-terminals-for-linux?page=1&commentId=6411583%3AComment%3A38160&x=1
code
For people who manage hardware devices such as storage arrays, routers and more using Microsoft Windows, HyperTerminal is a familiar tool. They use HyperTerminal to connect to all the devices mentioned above using a serial cable. But what if you have to manage all those devices using Linux? The answer is that Linux has two alternatives to HyperTerminal; one is command-line based and the other is GUI based. Let me start with the command-line tool first. It is called minicom. You can install this tool using the package manager of your Linux machine. In Fedora/Red Hat/CentOS: # yum install minicom Running it for the first time requires you to configure some settings by running the command below as root: # minicom -s The picture below shows the screen after running minicom -s. This is where you set the baud rate, the serial device you want to use, etc. After finishing the settings, save them so that you do not have to do it every time. You can save them to the default .dfl file, named .minirc.dfl in your home folder, or you can specify the name and location yourself. To change the saved settings, just run the command above again. The second tool is called cutecom, a graphical serial terminal. To install it on Fedora, CentOS or Red Hat: # yum install cutecom It is easier to use since it has a GUI. The picture below shows the cutecom main screen, where you can set your device, parity, baud rate, etc. Please verify the baud rate! Good to know that we can use it with Linux as well; I would like to try this on Linux in the next D-Link session. Thank you sir for such valuable information; I would like to get more information on this. OK, this is perfect for our knowledge. If you practice this, then you will know it deeply. Yes, Akash, this one is useful for IPsec.
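The post's tools are interactive; as a complementary sketch (my addition, not from the original thread), the same connect-over-serial idea can be scripted with pyserial. The device path, baud rate and the "show version" command below are placeholder assumptions; match them to your hardware.

```python
# Minimal sketch, assuming pyserial is installed (pip install pyserial) and the
# device is reachable on /dev/ttyS0; adjust port and baud rate to your hardware.
import serial

ser = serial.Serial(
    port="/dev/ttyS0",             # or /dev/ttyUSB0 for a USB-to-serial adapter
    baudrate=9600,                 # must match the device, just as in minicom -s
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)
ser.write(b"show version\r\n")      # hypothetical command for a router console
print(ser.read(1024).decode(errors="replace"))
ser.close()
```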
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141745780.85/warc/CC-MAIN-20201204223450-20201205013450-00605.warc.gz
CC-MAIN-2020-50
1,750
17
https://www.vn.freelancer.com/projects/video-editing/want-video-editing-31522115/?ngsw-bypass=&w=f
code
Job not found. Unfortunately, we could not find the job you are looking for. Find the latest jobs here: I'm looking for a developer who can implement a solution to convert XML files (which have table structures in them) to JSON. Create cPanel for hosting on Amazon AWS. Data entry, SKU, website, design. A shut-off valve is to be drawn in 3D CAD according to the given specification. I am looking for 1-2 talented individuals to assist my team with the launch of a 1500-piece genesis collection. I need someone with blockchain experience who can develop a website with MetaMask functionality along with a token smart contract in the next 30 days. I have provided marketing services for 2 teams that have done a combined 9,000+ Ethereum in sales volume. My goal is to achieve ... Use the structural analysis/design (DFD) approach to describe a Data Flow Diagram methodology that accepts as input a set of k squares of different size, 10<k<20. It calculates the dimensions of these squares. Then the methodology calculates: (i) the average length of all squares, (ii) the minimum length and the maximum length of these squares, and (iii) stores these values in a file. Finally, ... Symfony expert needed. Meanwhile, I've been developing the website locally and I faced a bug in the Symfony project, so I need help from a Symfony expert. If you are an expert, please demonstrate it with previous work. Looking for quality prospects and calling prospects to set the next appointment. Hello, I have a simple data entry job; you must be free now. Hi, I'm looking for someone to capture data from a sportsbook site and write it into a CSV file whose structure is well defined. See the instructions attached. Please bid only if you are an expert in Python, BID ONLY IF SKILLED, and only if you can start immediately with a demo of a few lines. *** The budget is indicated in the project; if you ask me about it in chat, you won't do for me. *** Thank you, F...
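Purely as an illustration of the computation the DFD posting above describes (my sketch, not part of any listing), with made-up side lengths and a hypothetical output file name:

```python
# Toy sketch: k squares (10 < k < 20) with placeholder side lengths; compute the
# average, minimum and maximum length and store the results in a file.
lengths = [4.0, 7.5, 2.25, 9.0, 5.5, 3.75, 8.0, 6.0, 1.5, 10.0, 2.0]  # k = 11

stats = {
    "average_length": sum(lengths) / len(lengths),
    "minimum_length": min(lengths),
    "maximum_length": max(lengths),
}

with open("square_stats.txt", "w") as out:   # hypothetical output file name
    for name, value in stats.items():
        out.write(f"{name}: {value}\n")
```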
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587770.37/warc/CC-MAIN-20211025220214-20211026010214-00077.warc.gz
CC-MAIN-2021-43
1,989
13
http://globalintegrity.org.dedi2560.your-server.de/about/what-we-believe/
code
Our vision is of a world in which people and organizations, in countries and communities across the globe, work together to improve governance and solve complex social problems. Our mission is to help people and organizations solve complex social problems by supporting locally-led innovation, learning, and adaptation. We believe that local leadership, learning, and action are essential elements in improving governance and solving complex social problems. Our values reflect this fundamental belief, guiding what we do and how we do it: We are laser-focused on helping our partners solve the complex problems they care about. Governance principles are important, but matter most when they inform effective action to address problems that citizens care about. We recognize that we and our partners operate in complex, dynamic systems, and that we do not have all the answers. Local actors are the real experts as regards the governance and development challenges they face and are therefore the focal point of all our work. We recognize our limitations, and aim to learn from and serve our local partners. We seek out and value diverse backgrounds, opinions, and perspectives when we collaborate with partners on particular projects and programs, in our recruitment of board members and staff, and in our own internal ways of working. As a relatively small, agile organization, we develop, test, and share innovative methodologies that can shape practice and improve outcomes across the entire sector. We communicate proactively, openly, and honestly about our challenges, successes, and lessons learned, and aim to listen more than we talk.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473524.88/warc/CC-MAIN-20240221170215-20240221200215-00880.warc.gz
CC-MAIN-2024-10
1,654
8
https://www.apiahf.org/staff/holly-tan/
code
Holly Tan (she/her) is the Associate Program Manager for APIAHF’s COVID-19 projects. In her role, she supports projects that provide COVID-19 education and vaccination resources for AANHPI communities. Prior to joining APIAHF, she worked at the California Department of Public Health on the CA COVID-19 Vaccination Program. Holly has always had a strong interest in working with AANHPI communities and has spent time working with the NYU Center for the Study of Asian American Health. She holds a BS in public health from UC San Diego and an MPH specializing in community health from UCLA.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00693.warc.gz
CC-MAIN-2022-49
588
1
https://www.takethislife.com/depression/how-do-i-act-positive-new-132874/
code
I currently feel as if the main cause for my depression is my depression itself (I do slightly feel stupid calling it 'depression' because compared to what other people experience this is probably nothing). What I mean by this is that I am an extremely negative person. I have always been like this; for as long as I can remember I have always felt that other people are better than me. I remember when I was 9 or so saying something like "The only thing I'm good at is being rubbish at everything" or something similarly stupid. So there is a cause to my negativity. However, I absolutely HATE being a 'negative' or 'quiet' person. When someone jokingly mentions how quiet I am they are pretty much saying "You are what you hate, you are a failure". So being 'depressed' is sort of part of my character, and it is really holding me back in life. I REALLY want a girlfriend, just someone who actually cares about me that I can talk to, but obviously no girl wants to be with a miserable, negative person such as myself. I'm not that bad looking, but I guess I'm seen as unattractive because of this. I look around at people my age and I am the ONLY unhappy person. People seem to think I'm a bad/weak person because of how I am. The stereotype of 'depressed teenager' seems to only apply to me, and no one is understanding or caring. So, going back to my initial question, how do I pretend to be happy? I think what I need to do is make myself feel like a normal person, instead of being trapped in this pit of bitterness and failure that is getting deeper and deeper. Maybe if I can pretend to be confident I can get more friends, maybe even a girlfriend, and this will solve everything? I have always resented the need for people to become part of society to feel safe and happy, but maybe that is just my loneliness talking? I have this constant feeling that my 'depression' is just some stupid thing I have made up for myself (particularly going through the threads on here) so I don't have to face the fact that I am an extremely lazy, socially inept person. I don't know. I'm getting really pissed off with myself and how I am. Thoughts? P.S. There is quite a bit else I would like to talk about but I thought I should keep this relatively focused.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076819.36/warc/CC-MAIN-20210414034544-20210414064544-00440.warc.gz
CC-MAIN-2021-17
2,258
3
https://community.esri.com/t5/geoprocessing-questions/how-to-maintain-distance-sde-features-class-wgs84/td-p/421673
code
In order to utilize the ESRI basemaps in our Server site we have moved our point and polygon (buffers of points) SDE feature classes over to WGS_1984_Web_Mercator_Auxiliary_Sphere from NAD 83 Albers. The problem we are running into is how to adjust our model (this model takes xy data and plots it, as well as geocodes address data, then puts buffers around the points and appends to an SDE feature class. Our site then utilizes map services which include both point and buffer SDE features) so that we maintain the proper distances of our radius rings. I understand that the Mercator projection is not useful for the buffer process, which is why our radii are not measuring correctly when checked with Google Earth or other mapping software. They draw as circles but do not maintain distance measurements. Our points are across the continental US. The xy data is in NAD83. The general workflow is as follows. Plot xy NAD83 > project to WGS 84 > append to SDE. ::: Take the plotted xy data output > run buffer tool (distances in meters) > project buffer outputs to WGS84 > append buffers to SDE. However, the output from this workflow is still not showing the proper distances. They are drawing as circles, but the distance is not maintained. Does anyone have insight into what steps we are missing? Should we be using different transformations or distances in our buffer tools? Seems pretty straightforward but we are running out of ideas. :::In Summary::: Need to utilize ESRI basemaps but also need to display radius rings derived from NAD83 data in the case of the xy table and WGS84 data when using the geocoding tool. How can we properly buffer these outputs and append back to our SDE feature class??!
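A common way to get true-distance rings while still publishing in Web Mercator is to buffer in a distance-preserving projection and only then reproject. The sketch below is not the ModelBuilder workflow from the question; it is a minimal pyproj/shapely illustration of that idea, with the EPSG codes, example coordinates and radius as assumptions.

```python
# Sketch: buffer a NAD83 point by a true ground distance, then project the ring
# to Web Mercator for display over ESRI basemaps. Requires pyproj and shapely.
from pyproj import CRS, Transformer
from shapely.geometry import Point
from shapely.ops import transform

def buffer_to_web_mercator(lon, lat, radius_m):
    nad83 = CRS.from_epsg(4269)            # source xy data (NAD83 geographic)
    web_mercator = CRS.from_epsg(3857)     # basemap CRS (Auxiliary Sphere)
    # An azimuthal equidistant projection centred on the point keeps distances true.
    aeqd = CRS.from_proj4(f"+proj=aeqd +lat_0={lat} +lon_0={lon} +datum=NAD83 +units=m")

    to_aeqd = Transformer.from_crs(nad83, aeqd, always_xy=True).transform
    to_merc = Transformer.from_crs(aeqd, web_mercator, always_xy=True).transform

    ring = transform(to_aeqd, Point(lon, lat)).buffer(radius_m)  # true-distance ring
    return transform(to_merc, ring)        # looks stretched on screen, correct on the ground

print(buffer_to_web_mercator(-98.5, 39.8, 5000).bounds)
```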
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00371.warc.gz
CC-MAIN-2023-50
1,694
5
https://angelnewsnetwork.blogspot.com/2022/12/after-death.html
code
Bringing the wisdom and teachings of the divine realms to humanity. HOW POWERFUL WE ARE After death we shall experience exactly what we expect and choose to experience. If you need hell, it will be there. If you need heaven, that too. So, what do you choose, being the creator creating all the time?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00287.warc.gz
CC-MAIN-2024-10
296
5
https://lists.debian.org/debian-firewall/2004/10/msg00028.html
code
On Thursday 07 October 2004 14:30, maarten wrote: > Can you elaborate more on the network setup ? I'm confused; if the > machines are on the same subnet, you can't prevent them from talking to > each other directly. Sure, they're not on the same subnet (unless you see the internet as one subnet). They are both on public IPs - but for reasons that were not disclosed to me, they're not allowed to talk to each other directly, even though they can (I know, I'm itching to find out why too!). Hence the box in the middle, which has two public IPs, so it's basically just getting the box in the middle to act as a proxy of sorts between the other two (I do not know what the software is either - if it's something like Postfix there would be much cleaner ways to do this, but I don't know) without them knowing. > If they're not, there is a router somewhere, and > adding your box will certainly complicate the setup, both for you and for > the router-person. Also, does your box in the middle have one or two NICs ? Two, each with a public IP. I do not have the IPs yet, I'll only be given those when I'm taken to the server room (no idea why), but I need to know that I can do this before I waste my and the client's time going there. > The thing is, if you just bridge everything, there is little use, is there. > The real question is _why_ do you need the box in the middle. If all it > should look like to the boxen is just thin air, I don't see what (legal) > purpose that box would serve. Is it for protection ? Monitoring ? I wish I knew. It's pretty senseless. > On that note: Do boxes A and C know that there is something in between them? No, they shouldn't - that's the idea. They are to believe that they are talking directly to one another, while they're actually talking to a Linux box <evil grin :-> that's passing the message on, pretending to be the sender. I hope that makes sense. Thanks for your replies Hans du Plooy Newington Consulting Services hansdp at newingtoncs dot co dot za
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804680.40/warc/CC-MAIN-20171118075712-20171118095712-00079.warc.gz
CC-MAIN-2017-47
1,997
33
https://huntr.dev/bounties/1-npm-bundle-phobia-cli/
code
OS Command Injection in adrieankhisbe/bundle-phobia-cli Sep 1st 2020 BundlePhobia is a tool to help you find the cost of adding an npm package to your bundle. It enables you to query package sizes. npm-utils.js has an unsanitized exec call which leads to arbitrary code execution:
const util = require('./npm-utils.js');
let a = util.getVersionList(';touch HACKED &&');
console.log(a);
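The PoC above is Node.js. As a general aside of mine (not part of the advisory), the same class of bug is easy to show in Python: interpolating untrusted input into a shell string executes the injected command, while passing an argument list does not.

```python
# Illustration only: contrast the injectable pattern with the safe pattern.
import subprocess

package = "lodash;touch HACKED"   # attacker-controlled input

# Vulnerable: with shell=True, everything after ';' runs as a second shell command.
subprocess.run(f"npm view {package} versions", shell=True)

# Safer: the whole string is handed to npm as a single literal argument.
subprocess.run(["npm", "view", package, "versions"])
```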
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00409.warc.gz
CC-MAIN-2023-14
412
8
https://tgdaily.com/technology/trending/computer-learns-human-language-to-teach-itself-to-play-games/
code
MIT’s developed a machine-learning system that allows a computer to read the instructions for playing Civilization – in one of several different languages – and improve its game. It’s worth noting that game manuals don’t give specific instructions for winning – just very general advice. However, once the computer was given the manual, its rate of victory jumped from 46 percent to 79 percent. “Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says SRK Branavan of University College London. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways.” The system begins with virtually no prior knowledge about either the task or the language in which the instructions are written. It has a list of actions it can take – like right-clicks, left-clicks or moving the cursor. It also has access to the information displayed on screen, and some way of gauging its success. But it doesn’t know what actions correspond to what words in the instruction set, and it doesn’t know what the objects in the game world represent. Initially, then, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded. In the case of software installation, the system was able to reproduce 80 percent of the steps that a human reading the same instructions would execute. In the case of the computer game, it won 79 percent of the games it played, while a version that didn’t rely on the written instructions won only 46 percent. “If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.” Most complex computer games include algorithms that allow players to play against the computer, rather than against other people – meaning that programmers have to develop strategies for the computer to follow and then write the code that executes them. Branavan says the MIT system could make that job much easier, automatically creating better-performing algorithms.
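The learning loop described above can be sketched in a few lines. The toy below is my illustration, not the MIT system: word-to-action hypotheses gain credence when acting on them pays off and are discarded when it consistently does not. The word/action pairs and the reward scheme are invented.

```python
# Toy credence-update loop: reward is a stand-in for real game feedback.
import random

hypotheses = {("settler", "build_city"): 1.0, ("settler", "attack"): 1.0}
GOOD = {("settler", "build_city")}        # pretend this pairing actually helps

for _ in range(200):
    if not hypotheses:                    # everything was discarded
        break
    pair = random.choice(list(hypotheses))
    reward = 0.1 if pair in GOOD else -0.1
    hypotheses[pair] += reward + random.uniform(-0.05, 0.05)   # noisy outcomes
    if hypotheses[pair] <= 0:             # consistently bad hypotheses are dropped
        del hypotheses[pair]

print(hypotheses)                         # the useful hypothesis survives with high credence
```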
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00789.warc.gz
CC-MAIN-2023-50
2,748
10
http://7drl.org/2013/03/18/7drl-success-uushuvud/
code
A rough Windows only version of Uushuvud is complete. You can download it here. I hope, I hope, I hope that it will run on your machine. In Uushuvud, you have mysteriously been transported into another world called Uushuvud, where your head has mysteriously been transformed into an ‘@’ symbol. You must attempt to survive long enough to find out what’s going on and how you might be able to get your head back to normal and get home.

Keypad/Arrow Keys: Move or attack adjacent enemies
P: Pick up items
R: Read books
H: Use health potion (restores 20 HP)
T: Use teleport potion (randomly teleports you somewhere within a ~60 cell x ~60 cell square area around your current location)
E: Use explode potion (hits all enemies within 5 cells of your location with fire)
X: End level (only valid in “end-of-level” temple (a big room with a small room inside with a lighter floor color))
Q: Quit (no confirmation dialog at all; this will immediately close the program)

- Swords appear as black squares. I haven’t been able to figure out why.
- When you die, the program instantly closes. There is no “You died! Sorry!” screen or anything.
- The enemies get stuck on rocks and walls easily.
- The game is a little hard. This is a feature, not a bug. It’s a roguelike.

- Don’t accidentally press ‘Q’. You will lose your progress.
- Pick up every potion in a level before travelling to the next one. This is especially true on early levels where the situation isn’t very dangerous. You will need those potions later.
- The number printed before weapons and armor is the level of the item, not a quantity. I know it looks weird, but whatever…
- If you pick up a weapon or armor, you will discard your current weapon or armor. It is gone forever, so don’t pick up equipment that’s worse than what you already have.
- Try to avoid using explode potions on single enemies. The ability to attack multiple nearby enemies at once is valuable. Try to lure them together.
- The enemies get stuck on rocks and walls easily. You can use this fact to help you escape when running low on HP and potions.
- Beware of Gaums.
- Spears and axes are generally the best weapons, followed by swords and maces, then clubs, and finally daggers. However, when choosing which weapon to use, also consider the weapon’s level. A 12 Dagger is better than a 1 Spear.
- Armor from worst to best: Leather, Iron, Steel, Tempered Steel, Perfect Steel
- Seriously, beware of Gaums. They are mean.
- When you level up, your HP increases and is restored to its maximum. Watch your EXP and consider whether health potions might be saved until after you level up.
- For that matter, keep an eye on your HP at all times.
- Have I mentioned to beware of Gaums?
- Good luck!
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937114.2/warc/CC-MAIN-20180420042340-20180420062340-00164.warc.gz
CC-MAIN-2018-17
2,759
28
https://softwareengineering.stackexchange.com/questions/151961/xaml-controls-in-winforms
code
We're considering converting our WinForms application to a WPF application. Part of the reason is that WPF/XAML seem to be the future. We are also using third party controls that we would like to be able to phase out. Making this conversion seems like a pretty big and time consuming undertaking, though. Would it make sense to develop XAML controls that could be used in our WinForms application as a first step in the process? My thinking is that the same controls would then be used in the WPF application and all of the look, feel, and functionality would be built into the controls in either environment. Have you considered doing it the other way round? If your end goal is to convert to WPF completely, you could start by setting up WPF application which hosts your WinForms control. Then you could replace WinForms controls one by one over time. WPF can be quite different from WinForms and you might want to tackle some architectural issues first such as navigation. The preferred pattern for building WPF apps is Model-View-ViewModel. I suggest doing some prototyping to get to know it if you aren't familiar with it.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491491.18/warc/CC-MAIN-20191206222837-20191207010837-00390.warc.gz
CC-MAIN-2019-51
1,127
3
https://forum.babylonjs.com/t/is-there-a-way-to-set-a-minimum-panning-distance/37239
code
In the ArcRotateCamera example, the scroll-wheel pans in and out. We can set the maximum distance using panningDistanceLimit, but can we set a minimum distance? As it stands, we are allowed to pan (actually zoom) in toward the center… and PASS it, now looking at the object from the other side. We should be able to limit the user from panning in that far. Looks like what you need is this: ArcRotateCamera | Babylon.js Documentation Are you saying this feature is implemented and documented? I’ve looked through the documentation and found nothing serving this end. I mean the first item in the link: lowerRadiusLimit. Thank you, musk! Sorry I missed what you linked! Yes, that did the trick! I feel the need to suggest that lowerRadiusLimit and upperRadiusLimit be set to some default value. Really, is there any case where the behavior we actually want is to be able to zoom THROUGH the mesh and come out the other side?
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00366.warc.gz
CC-MAIN-2023-06
925
6
https://offtopic.com/threads/fat32.3703702/
code
I have an unformatted external hard drive and need to load it with FAT32 so I can copy from both Macs and PCs. I don't give a shit about the single-file 4GB limit. I cannot figure out how to get my PC to format this thing with FAT32. Any suggestions? Until I do, I cannot connect to any of my friends' Macs to copy some music files. Help!
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687333.74/warc/CC-MAIN-20170920161029-20170920181029-00181.warc.gz
CC-MAIN-2017-39
331
1
http://forums.zimbra.com/migration/27578-real-world-exchange-2003-bes-migration.html
code
I'm currently in the early phases of planning a Zimbra test environment. I want to get a heads-up about what I am really in for before I get started. My current email environment consists of 3 Exchange 2003 servers (1 front-end, 2 back-end) and a BlackBerry Enterprise Server (4.1) hosting about 40 devices. My users make extensive use of shared calendars and send-as functionality in Outlook. I have some questions about permissions migration to Zimbra. Does the Zimbra migration tool or the PST import method preserve the permissions, or do they need to be manually recreated? What is the preferred method for synchronizing account creation/deletion activities between Active Directory and Zimbra? Has anyone out there done a successful Zimbra implementation using BES? How does it work? What does RIM think of Zimbra using the Exchange connector? Do they support it when problems arise? I'm under the impression that Zimbra is written in Java and because of that is a rather hardware-intensive application. I'm currently hosting 150 Exchange users on an HP ProLiant ML350 (P3 Xeon processor at 3 GHz with 4 GB of RAM). What are some real-world hardware specs for hosting 150 Zimbra users? How does the Zimbra disaster recovery model work? I'm currently using Backup Exec with their Exchange agent and it works great. It gives me individual mailbox backups so that I can restore a single message into one user's mailbox if necessary. Does Zimbra offer that functionality, or do I have to restore the entire mailstore? Has anyone had to do a bare-metal restore on a Zimbra server? What was that like?
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281876.4/warc/CC-MAIN-20160524002121-00238-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,601
7
http://mjtsai.com/tweetnest/2009/05/26
code
NewEgg never sent me the alert that I'd signed up for, though. I found out after checking manually. Once again, Dell has delayed my Seagate Momentus 7200.4 order, but now NewEgg has it in stock: http://tinyurl.com/cphlpw Just noticed that you can invoke gitx from Terminal with many of the same pruning options as git itself. Trying to figure out why Safari 4's WebKit is lying to my app about being done loading.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118713.1/warc/CC-MAIN-20170423031158-00386-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
413
4
http://sosmath.com/calculus/limcon/limcon06/limcon06.html
code
Your teacher probably told you that you can draw the graph of a continuous function without lifting your pencil off the paper. This is made precise by the following result: Intermediate Value Theorem. Let f(x) be a continuous function on the interval [a, b]. If d ∈ [f(a), f(b)], then there is a c ∈ [a, b] such that f(c) = d. In the case where f(a) > f(b), [f(a), f(b)] is meant to be the same as [f(b), f(a)]. Another way to state the Intermediate Value Theorem is to say that the image of a closed interval under a continuous function is a closed interval. We will present an outline of the proof of the Intermediate Value Theorem on the next page. Here is a classical consequence of the Intermediate Value Theorem: Example. Every polynomial of odd degree has at least one real root. We want to show that if P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 is a polynomial with n odd and a_n ≠ 0, then there is a real number c such that P(c) = 0. First let me remind you that it follows from the results in previous pages that every polynomial is continuous on the real line. There you also learned that P(x) behaves like its leading term a_n x^n for large |x|. Consequently, for |x| large enough, P(x) and a_n x^n have the same sign. But a_n x^n has opposite signs for positive x and negative x. Thus it follows that if a_n > 0, there are real numbers x_0 < x_1 such that P(x_0) < 0 and P(x_1) > 0. Similarly, if a_n < 0, we can find x_0 < x_1 such that P(x_0) > 0 and P(x_1) < 0. In either case, it now follows directly from the Intermediate Value Theorem that (for d = 0) there is a real number c ∈ [x_0, x_1] with P(c) = 0. The natural question arises whether every function which satisfies the conclusion of the Intermediate Value Theorem must be continuous. Unfortunately, the answer is no and counterexamples are quite messy. The easiest counterexample is the function Do you need more help? Please post your question on our S.O.S. Mathematics CyberBoard. Mohamed A. Khamsi
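As a computational aside (my addition, not part of the SOS Math page), the Intermediate Value Theorem is exactly what justifies bisection: once an odd-degree polynomial shows a sign change, the root it guarantees can be narrowed down numerically.

```python
# Bisection sketch: assumes f is continuous on [a, b] and f(a), f(b) have opposite signs.
def bisect_root(f, a, b, tol=1e-10):
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m                  # the sign change (and a root) lies in [a, m]
        else:
            a, fa = m, fm          # the sign change lies in [m, b]
    return (a + b) / 2

p = lambda x: x**3 - 2*x - 5       # odd degree, so the argument above guarantees a real root
print(bisect_root(p, 2, 3))        # ≈ 2.0945514815
```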
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506027.39/warc/CC-MAIN-20230921105806-20230921135806-00520.warc.gz
CC-MAIN-2023-40
1,903
11
http://dylex.blogspot.com/2011/01/
code
Dark Matter is the fundamental part of the universe on which our real ( seen ) universe is built. It consists of these three ( 3 ) parts. 1. The number 3. Most things can be broken down into 3 parts. For instance you and I consist of a frame ( skeleton ), a string distribution network ( veins ) and a hardware / software communication system ( brain / how to operate ( for instance arms ) and neuron connectors for signals. 2. Energy - The number 3 divided by 2 ( 3 / 2 = 1.5 ) is the common battery voltage. There is also a 9 volt battery which is really the number 3 multiplied by itself ( 3 X 3 ). The number 3 can be subdivided into the numbers ( 1, 2, 3 ). The number 1 represents the whole or completeness. The number 2 represents 2 choices ( yes or no ). The fraction of 2 ( ½ ) represents chance when nothing else is known or the word "maybe". The number 3 as a fraction is ( 1/3rd ). 1/3rd , for example, goes to the wholesaler, 1/3rd profit to the retailer, and 1/3rd for expenses. 3. Force - There is an outward force which is really an inflexible string. An inwardly directed string force which is unbreakable ( nuclear ) and a breakable chemistry force. If a force is still and is also flexed you have gravity. If a force has motion you have velocity and acceleration. A still force initially has resistance to movement which is inertia.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590901.10/warc/CC-MAIN-20180719125339-20180719145339-00608.warc.gz
CC-MAIN-2018-30
1,352
4
https://handlewithscare.libsyn.com/33-cube
code
Nov 11, 2021 Totemlydrunk and Holly Hooch are fighting for their survival this month as we deep dive into thanatophobia (fear of death) flicks. This week we take a look at the low-budget hit Cube (1997), where six strangers look to avoid deadly trap rooms... and each other. Credit for our STELLAR synthwave tracks goes to David Aselle! Be sure to join our Twisted Crew on Discord and tag along for our watch parties every Tuesday night at 7:30pm PT on Kast! Instagram: *NEW* https://www.instagram.com/handlewithscarepod/ YouTube *NEW* (Please subscribe - new videos will be posted weekly) https://www.youtube.com/channel/UC97_R3CAN4Vwr57qhlRx1Cw
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00193.warc.gz
CC-MAIN-2022-05
644
6
http://jdebp.uk/FGA/dns-edns0-and-firewalls.html
code
You've come to this page because you've asked a question similar to the following: My mail system cannot send mail to mailboxes in certain domains such as aol.com., hp.com., yahoo.com., earthlink.net., and sbcglobal.net.. I've discovered from reading its log that this is because my SMTP Relay client is unable to look up the MX resource record set for those domains, which is in turn because my resolving proxy DNS server receives no responses to its back-end MX queries sent to the content DNS servers for those domains. Why is this and how can I cure it ? This is the Frequently Given Answer to that question. Your firewall is preventing you from using EDNS0. You need to fix or to replace your firewall. Several DNS server softwares, most notably Microsoft's DNS server in Windows NT Server 2003 and later and ISC's BIND, support EDNS0, a mechanism (described in RFC 2671) for extending DNS query and response datagrams. One of the capabilities of EDNS0 allows DNS clients and servers to inform one another of whether they are capable of handling DNS/UDP datagrams that are larger than the original maximum size of 512 octets. Employing DNS/UDP datagram sizes greater than 512 octets is useful, since it avoids the setup/teardown costs, and the denial-of-service risk, of DNS/TCP, which would otherwise have to be used wherever a DNS response does not fit into a 512 octet DNS/UDP datagram. Microsoft's DNS server and ISC's BIND allow the content DNS servers that they talk to (i.e. that their back-ends send queries to and receive responses from) to employ large DNS/UDP datagram sizes in their responses, if they are capable of doing so. They allow this by advertising, in the queries that they send, that they support large DNS/UDP datagrams up to a specific maximum size. Similarly, the aol.com., msn.com., and sbcglobal.net. content DNS servers (amongst others) will employ large DNS/UDP datagram sizes for their responses whenever the entities querying them advertise that they are capable of handling such large responses. As such, in normal operation Microsoft's DNS server or ISC's BIND sends a back-end query to (say) the aol.com. content DNS servers, advertising that it supports large DNS/UDP response datagrams, and the aol.com. content DNS servers take advantage of this, sending back DNS/UDP responses larger than 512 octets wherever applicable. (At the time of writing this answer, in the case of MX back-end queries, the aol.com. content DNS servers were sending back DNS/UDP responses that are 550 octets long to any client that advertised via EDNS0 its ability to handle DNS/UDP response datagrams of such a size.) However, some firewalls are hardwired to expect that DNS/UDP datagrams will always be at most 512 octets long, an expectation that is incorrect, and will simply discard any DNS/UDP datagrams that are longer. If such a firewall is interposed between one's own resolving proxy DNS server and the content DNS servers on the rest of Internet, the responses from any content DNS servers that recognise the resolving proxy DNS servers' advertisement of large DNS/UDP datagram capability, and that proceed to make use of that capability in their responses, will be discarded by the firewall. In effect, one's resolving proxy DNS server will never receive any responses from those particular content DNS servers. To it, the content DNS servers will simply not be responding. As such, query resolution will fail as the back-end queries time out without receiving responses. 
The service fix for this problem involves reconfiguring, correcting, or replacing your firewall. Whether your firewall can be reconfigured or corrected to handle DNS/UDP datagrams larger than 512 octets will vary depending on what make and model of firewall, from what manufacturer, it is. The possibilities are too many to comprehensively deal with here. Consult your firewall’s operation manual and the entity that sold your firewall to you for details. For Cisco PIX firewalls version 6.3(2) and later it is merely necessary to reconfigure the firewalls with fixup protocol dns maximum-length 4096, replacing "4096" with whatever maximum DNS/UDP length one’s resolving proxy DNS server software actually uses. The local fix for this problem is to turn off the advertisement of large DNS/UDP datagram size capability that your resolving proxy DNS server makes in the back-end queries that it sends. The content DNS servers will assume that your resolving proxy DNS server is not capable of handling DNS/UDP datagrams larger than 512 octets, and will trim their responses to fit into 512 octets if the elimination of helpful-but-not-strictly-necessary data allows them to do that. Of course, applying this local fix forces the use of DNS/TCP for those cases where responses cannot be trimmed to fit into a 512 octet DNS/UDP datagram; where, had support for large DNS/UDP datagram sizes been advertised via EDNS0, such queries would have been handled via DNS/UDP without having to fall back to additional transactions via DNS/TCP. To apply this local fix in the case of The Internet Utilities, disable all use of EDNS0/UDP in dnsrcpd and dnsfcpd by invoking them with the /LARGEUDP- command-line option. To apply this local fix in the case of Microsoft's DNS server, set the maximum DNS/UDP datagram size that the server will advertise itself as capable of using (in the back-end queries that it sends to content DNS servers) to 512 octets, in the documented manner. (Microsoft's KnowledgeBase article #828263 says to disable all use of EDNS0 entirely. This is not, strictly, necessary unless the firewall is inspecting, and possibly manipulating, the actual content of DNS/UDP datagrams, rather than just their lengths.) To apply this local fix in the case of ISC's BIND version 9.3 or later, set the maximum DNS/UDP datagram size that the server will advertise itself as capable of using (in the back-end queries that it sends to content DNS servers) to 512 octets, via the edns-udp-size option in named.conf.
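For checking a path for this problem from the client side, a small probe can be handy. The sketch below is my addition (not from the article); it uses the dnspython library, and the server address is a placeholder. If the large-payload query times out while the 512-octet query succeeds, a middlebox is likely discarding oversized DNS/UDP responses.

```python
# Probe sketch: advertise different EDNS0 UDP payload sizes and see what comes back.
import dns.exception
import dns.message
import dns.query

def probe(qname="aol.com", server="192.0.2.53", payload=4096):
    query = dns.message.make_query(qname, "MX", use_edns=0, payload=payload)
    try:
        response = dns.query.udp(query, server, timeout=5)
        print(f"payload={payload}: received {len(response.to_wire())} octets")
    except dns.exception.Timeout:
        print(f"payload={payload}: timed out (response may have been dropped)")

probe(payload=4096)   # large responses advertised as acceptable
probe(payload=512)    # classic 512-octet behaviour as a baseline
```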
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00639.warc.gz
CC-MAIN-2024-10
6,009
19
https://www.mail-archive.com/[email protected]/msg19526.html
code
Kenneth Marshall <[EMAIL PROTECTED]> writes: > Dear PostgreSQL Developers, > This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1. It's pretty obvious that this patch hasn't even been tested on a big-endian machine: > + #ifndef WORS_BIGENDIAN However, why do we need two code paths anyway? I don't think there's any requirement for the hash values to come out the same on little- and big-endian machines. In common cases the byte-array data being presented to the hash function would be different to start with, so you could hardly expect identical hash results even if you had separate code paths. I don't find anything very compelling about 64-bit hashing, either. We couldn't move to that without breaking the API for hash functions of user-defined types. Given all the other problems with hash indexes, the issue of whether it's useful to have more than 2^32 hash buckets seems very far off indeed. regards, tom lane Sent via pgsql-patches mailing list ([email protected]) To make changes to your subscription:
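As an aside of mine (not from the thread), the point about the byte-array data differing by endianness is easy to see in a few lines: the same integer serialises to different bytes, so a byte-oriented hash naturally gives different results.

```python
# Sketch: the "same" 32-bit integer hashes differently once laid out in memory.
import hashlib
import struct

value = 0x1234ABCD
little = struct.pack("<I", value)     # b'\xcd\xab4\x12' on the wire
big = struct.pack(">I", value)        # b'\x124\xab\xcd' on the wire

print(hashlib.md5(little).hexdigest())
print(hashlib.md5(big).hexdigest())   # different digest, as expected
```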
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864798.12/warc/CC-MAIN-20180522151159-20180522171159-00237.warc.gz
CC-MAIN-2018-22
1,012
18
http://bigdata.sys-con.com/node/2560728
code
By Application Security | March 2, 2013 09:00 AM EST

It sounds like a parlor trick, but one of the benefits of API centric de-facto standards such as REST and JSON is they allow relatively seamless communication between software systems. This makes it possible to combine technologies to instantly bring out new capabilities. In particular I want to talk about how an API Gateway can improve the security posture of a Hadoop installation without having to actually modify Hadoop itself. Sounds too good to be true? Read on.

Hadoop and RESTful APIs

Hadoop is mostly a behind the firewall affair, and APIs are generally used for exposing data or capabilities for other systems, users or mobile devices. In the case of Hadoop there are three main RESTful APIs to talk about. This list isn’t exhaustive but it covers the main APIs.
- WebHDFS – Offers complete control over files and directories in HDFS
- HBase REST API – Offers access to insert, create, delete, single/multiple cell values
- HCatalog REST API – Provides job control for Map/Reduce, Pig and Hive as well as to access and manipulate HCatalog DDL data

These APIs are very useful because anyone with an HTTP client can potentially manipulate data in Hadoop. This, of course, is like using a knife all-blade – it’s very easy to cut yourself. To take an example, WebHDFS allows RESTful calls for directory listings, creating new directories and files, as well as file deletion. Worse, the default security model requires nothing more than inserting “root” into the HTTP call. To its credit, most distributions of Hadoop also offer Kerberos SPNEGO authentication, but additional work is needed to support other types of authentication and authorization schemes, and not all REST calls that expose sensitive data (such as a list of files) are secured. Here are some of the other challenges:
- Fragmented Enforcement – Some REST calls leak information and require no credentials
- Developer Centric Interfaces – Full Java stack traces are passed back to callers, leaking system details
- Resource Protection – The Namenode is a single point of failure and excessive WebHDFS activity may threaten the cluster
- Consistent Security Policy – All APIs in Hadoop must be independently configured, managed and audited over time

This list is just a start, and to be fair, Hadoop is still evolving. We expect things to get better over time, but for Enterprises to unlock value from their “Big Data” projects now, they can’t afford to wait until security is perfect. One model used in other domains is an API Gateway or proxy that sits between the Hadoop cluster and the client. Using this model, the cluster only trusts calls from the gateway and all potential API callers are forced to use the gateway. Further, the gateway capabilities are rich enough and expressive enough to perform the full depth and breadth of security for REST calls from authentication to message level security, tokenization, throttling, denial of service protection, attack protection and data translation. Even better, this provides a safe and effective way to expose Hadoop to mobile devices without worrying about performance, scalability and security. Here is the conceptual picture:

Intel Expressway API Manager and Intel Distribution of Apache Hadoop

In the previous diagram we are showing the Intel(R) Expressway API Manager acting as a proxy for WebHDFS, HBase and HCatalog APIs exposed from Intel’s Hadoop distribution.
API Manager exposes RESTful APIs and also provides an out of the box subscription to Mashery to help evangelize APIs among a community of developers. All of the policy enforcement is done at the HTTP layer by the gateway, and the security administrator is free to rewrite the API to be more user friendly to the caller; the gateway will take care of mapping and rewriting the REST call to the format supported by Hadoop. In short, this model lets you provide instant Enterprise security for a good chunk of Hadoop capabilities without having to add a plug-in, additional code or a special distribution of Hadoop. So… just what can you do without touching Hadoop? To take WebHDFS as an example, the following is possible with some configuration on the gateway itself:
- A gateway can lock down the standard WebHDFS REST API and allow access only for specific users based on an Enterprise identity that may be stored in LDAP, Active Directory, Oracle, Siteminder, IBM or relational databases.
- A gateway provides additional authentication methods such as X.509 certificates with CRL and OCSP checking, OAuth token handling, API key support, WS-Security and SSL termination & acceleration for WebHDFS API calls. The gateway can expose secure versions of the WebHDFS API for external access.
- A gateway can improve on the security model used by WebHDFS, which carries identities in HTTP query parameters that are more susceptible to credential leakage compared to a security model based on HTTP headers. The gateway can expose a variant of the WebHDFS API that expects credentials in the HTTP header and seamlessly maps this to the WebHDFS internal format.
- The gateway workflow engine can map a single-function REST call into multiple WebHDFS calls. For example, the WebHDFS REST API requires two separate HTTP calls for file creation and file upload. The gateway can expose a single API for this that handles the sequential execution and error handling, exposing a single function to the user.
- The gateway can strip and redact Java exception traces carried in the WebHDFS REST API responses (for instance, JSON responses may carry org.apache.hadoop.security.AccessControlException.*, which can spill details beneficial to an attacker).
- The gateway can throttle and rate-shape WebHDFS REST requests, which can protect the Hadoop cluster from resource consumption from excessive HDFS writes, open file handles and excessive create, read, update and delete operations which might impact a running job.

This list is just the start; API Manager can also perform selective encryption and data protection (such as PCI tokenization or PII format-preserving encryption) on data as it is inserted into or deleted from the Hadoop cluster, all by sitting in between the caller and the cluster. So the parlor trick here is really moving the problem from trying to secure Hadoop from the inside out to moving and centralizing security at the enforcement point. If you are looking for a way to expose “Big Data” outside the cluster, the API Gateway model may be worth some investigation! The post How to secure Hadoop without touching it – combining API Security and Hadoop appeared first on Security Gateways@Intel.
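To make the header-to-query-parameter and stack-trace-redaction ideas above concrete, here is a deliberately tiny sketch of my own (not Intel Expressway, and not a production gateway) using Flask and requests; the backend URL, header name and redaction rule are assumptions.

```python
# Toy gateway sketch: map a caller-supplied identity header onto the user.name
# query parameter WebHDFS expects, and redact Java exception traces on the way out.
from flask import Flask, Response, request
import requests

app = Flask(__name__)
WEBHDFS = "http://namenode.example.internal:9870/webhdfs/v1"   # assumed backend

@app.route("/hdfs/<path:hdfs_path>", methods=["GET"])
def proxy(hdfs_path):
    user = request.headers.get("X-Authenticated-User")
    if not user:
        return Response("missing identity header", status=401)
    params = dict(request.args)
    params["user.name"] = user                     # header -> WebHDFS internal format
    upstream = requests.get(f"{WEBHDFS}/{hdfs_path}", params=params, timeout=10)
    body = upstream.text
    if "org.apache.hadoop" in body:                # crude stack-trace redaction
        body = '{"error": "internal error"}'
    return Response(body, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type", "application/json"))

if __name__ == "__main__":
    app.run(port=8080)
```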
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463485.78/warc/CC-MAIN-20150226074103-00024-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
17,497
68
http://zanedqaku.diowebhost.com/6010772/the-fact-about-pay-me-to-do-your-project-that-no-one-is-suggesting
code
By submitting your title, you grant us authorization to publish it with your letter. We won't ever publish your electronic mail. You must fill out all fields to submit a letter. Type of Letter: ✱ In selected unexpected emergency or mission crucial predicaments, an company may possibly apply an yearly top quality pay out cap instead of a biweekly premium fork out cap, subject to the situations offered in regulation and regulation. Question inquiries. A great way to engage in class should be to check with inquiries. When you've got a matter about one thing You do not have an understanding of or the Trainer claimed anything and you simply need to know more about it, raise your hand and talk to. Even just focusing enough to pay attention for factors you might want to talk to questions about will help you pay back far more interest. You should learn, you ought to pay attention to your Instructor, and you want to soak up all of that details at school; nonetheless it's just so...tedious! It is really challenging to target what Avogadro's Number is when you'd Significantly instead think about looking at that sweet classmate later on, but having a couple of psychological and Actual physical tips, you'll be able to pay attention in school. thoughts as well slender. You have to get started with wide relational queries. An excellent problem: Do Grownup learners in a rural Grownup training environment have qualities Do I spend just about every waking instant partaking with my Young children? No – significantly from it. I have confidence in the worth of instructing a child to acquire unbiased play at the same time. discovered that math-nervous moms and dads who assistance little ones on homework breed math-anxious small children – gurus say there are plenty of tactics you may try out that don’t demand relearning arithmetic. Start off engaged on it now. It is a great deal easier to think of causes to try and do other matters, and stay away from carrying out your homework. But for those who battle to finish and discover the time to accomplish your homework regularly, this sort of procrastination might be to blame. The easiest way to steal overtime for your homework? $619 really worth of digital publications Get ebooks collectively valued at up to $619. Pay out what you wish Identify your cost of $one or more and boost your contribution to improve your bundle. DRM-no cost Obtain these guides onto your preferred looking at product to peruse any place, anytime. that can help emphasis focus. Pursuing this structured presentation the committee begins to ask questions, The bus may be distracting, or it may be an awesome resource. As it's brimming with your classmates, check out to obtain other college students to work with you and acquire matters accomplished far more promptly. Does reading through for English take the longest? Get started with by far the most demanding homework to provide yourself quite possibly the most time to accomplish it, then proceed on the less complicated jobs it is possible to entire additional swiftly. Would not it's excellent if there have been a bunch of theses/dissertations available for reading right on the net? Properly, there are a few resources you need to be mindful of that can dig this Permit the thing is just what the concluded product or service could seem like. You could often buy a duplicate of most US dissertations/theses. " problems might be a practical launch, regardless of whether they can't pretty clue you into the appropriate approach. 
Some parents don't always know how to help with your homework and may end up doing too much of it. Try to keep yourself honest: asking for help doesn't mean asking your mum or dad to do your work for you.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746528.84/warc/CC-MAIN-20181120171153-20181120193153-00098.warc.gz
CC-MAIN-2018-47
3,810
14
https://cripsa.com/password-mfa-docs
code
This document describes how a user signs up and is registered in the user database on the first login attempt. Before this, the client/development team must perform the following tasks:
1. Create a project (or select an existing project) by logging into https://cripsa.com
2. Use the link obtained in the step above to call the Password-MFA login screen from the customer's home page. The login prompt will look something like the following:
Figure 1: Password-MFA Login Screen
Create Project through Cripsa
Log in to the Cripsa Dashboard using your email account. Once logged in, create a project for OIDC. Fill in all the details; all fields are required. Click "Create Project", then click Continue or Register App.
Registering the App with Cripsa
Now go back to https://cripsa.com/password-mfa-register-app -> select the project you have just created. Here three fields are mandatory:
- Project Name
- Register Type
In Register Type there are two options, and you have to select one of them according to your requirement. For more information on these options, please see the FAQ. The client then has to use the "buttonCodeCallbackURL" URL in their home page to bring up the Password-MFA login prompt for the end user.
User Registration through Sign-up
Access the code URL and click Sign up. Go to the e-mail account and note down the verification code. Install an authenticator app (Google Authenticator or Microsoft Authenticator) and scan the code to get the verification code. The user is now registered and can sign in using the authenticator code and password.
User Login Testing
Use the code URI to get the Password-MFA login prompt, then use the authenticator app to get the passcode. The screen above is just an example. After entering the code, click Sign in. On a successful login, the application will return a code. Similarly, if you use "ButtonTokenCallbackURL", it will return the tokens (access token and ID token). The ID token can be verified with a JWT library, and the customer should log the user in only after verification.
Frequently asked questions
1. How many registration options are available in Cripsa for OIDC, and what is the difference between them?
In Register Type there are four options:
- Registration with MFA
- Registration with Only MFA
Only the login screen differs for each registration type. In the diagram above, you can see MFA is available along with OIDC authentication. In the next diagram, you can see MFA is available along with OIDC and OAuth 2.0 authentication.
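The document says the ID token can be verified with a JWT library before logging the user in. A minimal server-side sketch using PyJWT is shown below; it assumes the tokens are standard OIDC RS256 JWTs, and the JWKS URL, issuer, and client ID are placeholders to be replaced with the values for your Cripsa project.

```python
# Minimal sketch of ID-token verification with PyJWT, assuming standard OIDC
# RS256 tokens. JWKS_URL, ISSUER, and CLIENT_ID are placeholders, not real values.
import jwt

JWKS_URL = "https://example-issuer/.well-known/jwks.json"  # placeholder
ISSUER = "https://example-issuer"                          # placeholder
CLIENT_ID = "your-app-client-id"                           # placeholder

def verify_id_token(id_token: str) -> dict:
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    # Raises jwt.InvalidTokenError on a bad signature, expiry, or wrong audience/issuer.
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )

# claims = verify_id_token(token_from_callback)
# Create the user session only after this call succeeds.
```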
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817438.43/warc/CC-MAIN-20240419141145-20240419171145-00634.warc.gz
CC-MAIN-2024-18
2,562
39
https://www.curezone.org/forums/fm.asp?i=420343
code
I started using 3lac 3 weeks ago... it seems to be working fine, but I am curious as to why it contains "refined yeast powder". The diet I am following (for candida) is sugar free, yeast free, vinegar free, etc. Can anyone tell me how this product is not in contradiction with the diet protocol?
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00544.warc.gz
CC-MAIN-2021-04
295
7
https://forum.gethopscotch.com/t/how-do-i-make-a-clock/968
code
How can I make a clock that runs the code every second? Can anyone help me? So close! You need to put your Set Text inside a Repeat Forever block, like this: I hope this helps! You almost did it! You forgot that between each new second there are 1000 milliseconds. The code should look like this. By the way, if you're planning on making a timer, here's the tutorial. I love this layout, I want it back now. Just kidding, I don't even use the app, but I wouldn't have left if this layout still existed.
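The advice in this thread amounts to "update the text, then wait 1000 milliseconds, inside a repeat-forever loop". A text-mode sketch of the same idea in Python (not Hopscotch blocks) looks like this:

```python
# Equivalent of a Hopscotch "repeat forever" clock: update the label, then
# wait 1000 milliseconds before the next tick.
import time

seconds = 0
while True:
    print(f"\rElapsed: {seconds} s", end="", flush=True)
    time.sleep(1)   # the "wait 1000 milliseconds" block
    seconds += 1
```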
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823674.34/warc/CC-MAIN-20181211172919-20181211194419-00139.warc.gz
CC-MAIN-2018-51
495
7
http://www.cruisemates.com/forum/788636-post5.html
code
Thanks to all of you for your replies and information. I was most concerned about kids and cameras being allowed in the area. A few men looking doesn't bother me too much if that is what they want to do. I did notice the replies to my question were from men. Are there any women out there that use this sundeck? I am really looking forward to my cruise!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00521-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
353
4
https://searchsoftwarequality.techtarget.com/tip/Estimation-approaches-in-Agile-development
code
The way estimating is done in Agile environments is different from the way it’s done in traditional environments and it can take some time to get right. In this tip, we’ll look at how “points” are used to estimate development efforts and talk about some techniques Agile teams use. We’ll also take a look at how teams use story size, velocity and iterations so that ultimately they are able to estimate efforts accurately. Using story points Requirements for Agile teams are called stories. A story is a piece of work for the team to get done in an iteration. Iterations are usually two weeks long, but some teams have longer iterations and some have shorter ones. The goal of Agile estimation is to have just enough stories to work on that at the end of the iteration, every story is completely done ("done done"), and the new running, tested features can be released to production. Agile estimation is accomplished by assigning "points" to stories, and then figuring out how many points the team can accomplish in any given iteration. There are several ways to think about points. Finally, the number of points that a team can accomplish in an iteration is called "velocity." Points without time The classic, traditional approach to assessing the point value of stories is to do so without relating points to any sort of time value. In this approach, points are free-floating units that come to have particular meanings for particular teams. Some teams will consciously avoid any such association by calling their points something funny, like "jelly beans." Usually teams estimate points using Fibonacci values, so a very small story will be one point, a slightly larger one, two points, a slight larger one than that three points, then five points, eight points, 13 points, etc. Teams just starting out with Agile estimation always estimate badly. Usually for the first iteration the team will take on too much work. Having failed to ship all the stories in the first iteration, the team takes on too little work in the second iteration. This back-and-forth pattern of taking on too many points, then too few, oscillates for a few iterations until the team as a whole gets a sense of which stories are worth how many points, and also of how many points the team can reliably achieve. Over time, these values become quite consistent, and Agile teams using this way of estimating become knowledgeable and reliable as to how many points they can achieve for each iteration. Points based on time There are two other ways to think about points that I know of: "ideal hours" and a system based on half-days of work. A team that estimates using ideal hours thinks, "If we had no meetings, no appointments, no distractions at all, how many hours would it take to finish this story?" The team then tracks how many hours it actually takes to finish the story, and uses that value to measure their velocity. A typical team will have ideal hours estimates that are roughly half of real hours. So a story that is estimated at four hours will often actually take eight hours. It is extremely rare that an agile team, no matter how skilled and practiced, has a ratio of ideal hours to real hours of more than 3:4. That is, even the sharpest and best Agile teams will accomplish only about six hours worth of work in an eight-hour day. It is worth noting that ideal hours estimates are always wrong. 
This approach is useful because even though the estimates are always wrong, over time the team's estimates will be wrong in a very consistent way, and the ratio of ideal hours to real hours will be consistent enough to plan the stories for each iteration well. Another scheme that works well is to estimate stories based on half-days of work. A story that will take only a half day is one point, one that will take a full day is two points, two full days is four points, etc. This approach is similar to the ideal hours approach, but has the benefit of being slightly more accurate overall, but at the cost of being less precise. In general, it is better to have smaller stories than larger ones. Large stories are often called "epics" and Agile teams will typically decompose or "slice" larger stories into smaller ones. Mike Cohn of Mountain Goat software has a good example of this at http://www.mountaingoatsoftware.com/topics/user-stories. Cohn's example of an epic is "As a user, I can backup my entire hard drive." He slices this story into any number of smaller stories: "As a power user, I can specify files or folders to backup based on file size, date created, and date modified," "As a user, I can indicate folders not to backup so that my backup drive isn’t filled up with things I don’t need saved," and so on. The reason to have smaller stories is that larger stories are more difficult to estimate, and that makes it hard to predict a team's velocity. I have an example from my own experience that uses half-days, which makes the example easier to think about. On this team, we discovered over time that any story estimated at more than eight points, or four full days of work, was wildly unpredictable as to how long it would actually take to complete. Eventually the team instituted a rule that any story estimated at over eight points was required to be sliced into smaller stories, and those smaller stories estimated separately. That was a very good team. They had close control over their velocity, and the last I heard of them, they had shipped 35 of the past 38 two-week iterations, and the three they missed were due to things like holidays. I once knew someone who called himself an "iteration shepherd," and he had a saying that he repeated constantly: "Velocity is a measurement, not a goal." This means that a team that estimates honestly and always ships running, tested features, will know very closely how much work they can accomplish in an iteration. For example, say that such a team, after time, is confident that they can release 50 points worth of work every two weeks. If management or some other agent requires this team to release 60 points worth of work in an iteration, something will have to slip. The team will take on technical debt, or the features will have lower quality, and morale will suffer. There is a school of thought in the Agile community that believes that velocity can be increased in a systematic way. This is often called "hyper-velocity." In my opinion, this is a poor way to manage velocity. It does not stand to reason that a team can be forced somehow to increase their velocity indefinitely, or at least not without causing some aspects of the software or of team morale to suffer greatly. I strongly recommend readers to take to heart that motto: "Velocity is a measurement, not a goal." Agile teams estimate stories using points of some kind. Over time, the team learns exactly how many points they can finish in each iteration. That number is the team's velocity. 
Velocity should be an honest measurement of what the team can accomplish. This scheme is remarkably simple, but also remarkably efficient. However, it takes time and negotiation before a team will evolve to the point that they can be confident in their velocity. Teams just starting out will oscillate between taking on too many points in an iteration and taking on too few, but over the course of many iterations, those oscillations become smaller and smaller, until that team just starting out becomes a great Agile development team.
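As a small sketch of "velocity is a measurement, not a goal", the planning number can be derived purely from what was actually finished in recent iterations. The iteration history below is made up for illustration; the window size is a team choice.

```python
# Sketch: plan the next iteration from measured velocity (points actually
# completed), never from a target handed down from outside the team.
def measured_velocity(completed_points, window=3):
    """Rolling average of points finished per iteration."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [32, 55, 41, 48, 50, 49]      # early oscillation, then it settles
plan = measured_velocity(history)        # ~49 points
print(f"Plan roughly {plan:.0f} points for the next iteration")
```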
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00146.warc.gz
CC-MAIN-2020-10
7,470
24
https://maxcoderz.org/forum/viewtopic.php?p=2957&sid=9fe56bfc4b0af326c8fb57144bda0c9f
code
Duck wrote:Thats correct. And I can understand that people want optimized games thus want to write their own routines, and for that purpose one needs to understand the grayscale principles and GPP's implementation. The paper would serve these needs. yeah it would be interesting to know how to do youre own grayscale routines, especially if youre game isnt going to be a GS game but just output a GS pic or something well good luck duck hey maybe u should release a zelda tutorial. Its the only Gs open source rpg right now. I was reading through the source i couldnt really understand much. The map routine is slightly modified. Well it would be cool if you could tell me what the - submaps are - what does charcyle, supermapx and all the variable stand for - and is the sprite size 8X8 Would be cool if u could explain the concept of the submaps The big 16x16 tiles in the game are actually composed of four 8*8 tiles. The tilemap stores the 'matrix' of the big tiles. The file "convert.asm" is used to look up which four 8x8 tiles the big tiles are composed of. So if '6' is the tile# of a 8x8 grass tile, and '7' is the tile# of a flower tile, the 'conversion data' of the big 16*16 tile with grass and flowers with tile# '1' would be stored in 'convert.asm' as: Using only 16x16 tiles would be extremely redundant. If i used complete 16x16 tiles, the (graphical) tileset would take a huge amount of space! Remember that a single 16x16 sprite takes 2*2*16=64 bytes of data! An 8x8 sprite takes 'only' 16 bytes of data. The original Gameboy Zelda also uses small 8x8 tiles to form complete 16x16 tiles. Many 8x8 tiles are used throughout many different 16x16 tiles, like grass or sand or wall etc, or used multiple times in the same 16x16 tile. This way, we got the best of two worlds: small tilemap size because we store them as 16x16 tiles; and small tileset (sprite) size because they graphical tile data is stored as combining 8x8 sprites that are each used mutiple times. So when Link enters a submap (which is actually one screen in the Gameboy Zelda), the 16x16 tiles are 'decompressed' into smaller 8x8 tiles into the buffer 'submap'. Again, this is exactly the way the original Zelda did it. Corbin also uses this; funny enough in that time I didnt know Zelda used it too. It could be even more space-efficient though. Why restrict the size of combined tiles to 16x16? It would be nice if they could be of variable size... 8x16, 16x8, 64x32.. you name it. When decompressing (entering a submap), the height+width+contents of each big tile could be looked up and 'decomposed' to small 8x8 units. (Like decomposing of 16x16 tile to 8x8 tiles in Zelda). The good thing of this is that multi-tile structures that are used multiple times in a game would only have to be stored once: as a very big combined tile. Applying this to every reocurring structure like grass field, forests, parts of houses etc, this is the ultimate way of reducing tilemap size. This has never been implemented though, but I would like to make a level editor with this technique once. This Zelda game is everything except optimized for speed, its written to be structured. Yes, it would be plenty faster to drawn 1 big 16x16 sprite instead of 4 8x8 sprites. Also, the game redraws the complete tilemap when scrolling. This could be much faster by drawing the map to a buffer and updating this buffer every time you scroll. There may be even faster methods. Plenty of room for speed optimizations.
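To make the submap "decompression" described above concrete, here is a conceptual sketch in Python rather than Z80 assembly: the tilemap stores 16x16 "big" tile numbers, and a lookup table (playing the role of convert.asm) expands each one into four 8x8 tile numbers when a submap is entered. The tile numbers are invented for illustration.

```python
# Conceptual sketch of the Zelda-style tile scheme: big 16x16 tiles are stored
# in the map, and each expands to four reusable 8x8 tiles at decompression time.
CONVERT = {
    # big tile -> (top-left, top-right, bottom-left, bottom-right) 8x8 tiles
    1: (6, 7, 6, 6),   # grass with one flower
    2: (6, 6, 6, 6),   # plain grass
}

def decompress_submap(big_map):
    """Expand a grid of 16x16 tile numbers into a grid of 8x8 tile numbers."""
    small = []
    for row in big_map:
        top, bottom = [], []
        for big in row:
            tl, tr, bl, br = CONVERT[big]
            top += [tl, tr]
            bottom += [bl, br]
        small.append(top)
        small.append(bottom)
    return small

print(decompress_submap([[1, 2]]))
# [[6, 7, 6, 6], [6, 6, 6, 6]]
```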
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653631.71/warc/CC-MAIN-20230607074914-20230607104914-00301.warc.gz
CC-MAIN-2023-23
3,479
17
https://lizards.opensuse.org/2015/10/31/proprietary-amdati-catalyst-fglrx-rpms-released-for-leap-42-1/
code
For the last few months, we have all known that the new openSUSE Leap 42.1 is on its way, but fglrx drivers were missing, even with the 15.9 release in September. There is really no warranty that the drivers will work for you! If you are satisfied with the open-source radeon drivers, don't risk breaking your computer. All the trouble present in 15.9 will still be there, like the failing GNOME 3 gdm start; see the previous article from Sebastian. His scripts, also available in the raw-src directory on the mirror, allow you to apply a quirk patch. I consider the release of those RPMs experimental: they work for some people and are sometimes convenient, but they can also cause kernel segfaults on some configurations. If you are in trouble, start your openSUSE in rescue mode with nomodeset on the boot line, then zypper rm the fglrx-related packages and reboot; you should safely return to the free radeon driver. Today, while packing my stuff for SUSECON15 in Amsterdam, I was pleased to get feedback on IRC from users who were able to run the fglrx Tumbleweed packages on their Leap 42.1. I then started a Leap VM and hacked Sebastian Siebert's 15.9 script a bit to add support for Leap. The drivers build and install correctly. I have also updated the one-click installer for people using Leap. Since Leap is available only for the x86_64 platform, the driver follows the same arch. Links to the new repository openSUSE_Leap_42
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00022.warc.gz
CC-MAIN-2022-49
1,469
14
https://drop.id/career/
code
Come and take part in making our world better through digital technology. This is our journey: how we learn things, create things, and build purpose for society through digitization. Act to Impact: Goal-oriented. Focus on the purpose. Always listen and aim to solve the problem. Be data-driven and evaluate often. Don't be afraid to make mistakes. Be a Patron: Be kind, honest, and back each other up. Move and grow as a team. And have fun! Let us know if there is a position that matches you. We will be glad to receive your application. Send us your CV and portfolio* by email to [email protected] or just click the image beside.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099942.90/warc/CC-MAIN-20231128183116-20231128213116-00744.warc.gz
CC-MAIN-2023-50
666
8
https://minmax.gg/isaac/consumables/ix-the-hermit
code
#10 · Card IX - The Hermit "May you see what life has to offer" Teleports Isaac to the shop. If there is no shop, this will act as a random teleport. If the shop has not yet been accessed, this card will unlock the door without requiring a key (as long as the player exits through the main entrance).
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351454.16/warc/CC-MAIN-20210225182552-20210225212552-00613.warc.gz
CC-MAIN-2021-10
301
4
https://sunnyside-times.com/pages/help/help-with-navigation/
code
Help with Navigation The black bar with topics across the middle of the page is the primary way you move around the site. You may move your mouse over any of the topics and click to be taken to that page. Note that these menus may work a little differently from some others you might be familiar with. When you open a page, below the menu bar will be a list of pages you have visited to get to the page currently being viewed. You may click your mouse on any of these selections and you will be taken to that page. In this example, you could go to the Home page or to the pages listing all the activities. Many pages show options using "Buttons" to make selections. On these pages, clicking on a button will open a corresponding page. Again, you may revert to a preceding page using the headings under the black bar. If you're unsure of how to navigate, give it a try. We think you'll find getting around this system very easy and enjoyable.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104660626.98/warc/CC-MAIN-20220706030209-20220706060209-00194.warc.gz
CC-MAIN-2022-27
941
7
https://thesportseconomist.com/shermer-on-doping/
code
Michael Shermer is a columnist at Scientific American who is increasingly interested in Economics. His recent book, The Mind of the Market might be well worth your perusal. His current column at Scientific American examines doping in sport from both an economic and a personal perspective (Shermer is an avid competitive cyclist). Since it is free it's definitely worth checking out. Here's the economic essence of the piece: To end doping in sports, the doping game must be restructured so that competing clean is in a Nash equilibrium. That is, the governing bodies of each sport must change the payoff values of the expected outcomes identified in the game matrix. First, when other players are playing by the rules, the payoff for doing likewise must be greater than the payoff for cheating. Second, and perhaps more important, even when other players are cheating, the payoff for playing fair must be greater than the payoff for cheating. Players must not feel like suckers for following the rules. He lists five changes that would push incentives in this direction. Hat tip: Steve Levitt at Freakonomics.
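Shermer's two conditions can be checked mechanically on a payoff matrix. The sketch below uses invented payoff numbers purely to illustrate the test: playing clean must beat cheating both when the rival is clean and when the rival cheats.

```python
# Toy payoff matrix for the doping game; the numbers are invented. The two
# checks mirror Shermer's conditions for making "compete clean" an equilibrium.
payoff = {  # (my action, rival action) -> my payoff
    ("clean", "clean"): 3,
    ("clean", "dope"):  2,
    ("dope",  "clean"): 1,   # sanctions make cheating a loser even vs. a clean field
    ("dope",  "dope"):  0,
}

def clean_is_equilibrium(p):
    beats_cheating_vs_clean = p[("clean", "clean")] > p[("dope", "clean")]
    beats_cheating_vs_doper = p[("clean", "dope")]  > p[("dope", "dope")]   # no "sucker" payoff
    return beats_cheating_vs_clean and beats_cheating_vs_doper

print(clean_is_equilibrium(payoff))  # True under these payoffs
```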
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739048.46/warc/CC-MAIN-20200813161908-20200813191908-00494.warc.gz
CC-MAIN-2020-34
1,110
4
http://bethart22.blogspot.com/2009/09/finally.html
code
Thursday, September 10, 2009 A small moment to post! Unfortunately, I haven't been spending all this time making cool things. But i am working on a project for my art education class that I'm pretty psyched about. We are keeping art journals all semester, so I'll be posting pic of my favorite pages and I'll make a video at the end of the semester with all the pages in. So enjoy and I welcome any kind of criticism, I've never really done this before, so tips are welcome!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120694.49/warc/CC-MAIN-20170423031200-00431-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
474
2
https://forum.qt.io/topic/58263/building-qt-4-8-7-with-msvc-2015
code
Building Qt 4.8.7 with MSVC 2015 I am trying to revive a 4.X qt project (which I then plan to migrate to Qt 5.x), but I would like to build with MSVC 2015. I am running in a MSVC 2015 native x32 command shell the following configure line to keep things simple: configure -platform msvc-2015 But i am running into a compile error: C:\Qt\qt-everywhere-opensource-src-4.8.7\src\corelib\io\qfsfileengine_win.cpp(64) : fatal error C1083: Cannot open include file: 'shlobj.h': No such file or directory I have installed all of the windows development kits, and a find reports ShlObj.h in the following directories: Program Files (x86)/Microsoft SDKs/Windows/v7.1A/Include/ShlObj.h Program Files (x86)/Windows Kits/10/Include/10.0.10240.0/um/ShlObj.h Program Files (x86)/Windows Kits/8.1/Include/um/ShlObj.h Do I need to tweak the qmkspecs to pull in a particular kit path? Any suggestions how to get the build working? I can't believe I am the first to see this... Did you manage to compile Qt4 with msvc2015?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00848.warc.gz
CC-MAIN-2023-50
1,003
12
https://synthiam.com/Community/Questions/Variable-mmay-mmaz-Or-Mmax-Not-Defined-720
code
Asked — Edited In the MMA7455 window, I wrote a balance script with an IF statement; when I press Run Script it stops with "Variable $mmay not defined". What's really weird is that the whole script ran fine until I saved my file and reopened it an hour later. This is a bug, right? This is the latest version of the software, 3/7. If the control assigns the variables, you must use the control before the variables are created; this is not done on project open. To avoid script errors you can define the variables in an init script, however until the control is used they will remain whatever was set in the init script (it just won't error in the script). 1. The MMA7455 responds, I write the script, and the robot balances. 2. I save the file and close ARC. 3. I open the file later, confirm the MMA sensor reacts to movement, and select Run Script. 4. The script stops on the IF, saying "variable $mmay not declared". I closed and reopened ARC a few times and then it ran through the script fine. My only guess at the problem: the Set Variable checkbox would not stay selected. Also, the Run Script timer still never stays selected for me once I leave the edit window. Thanks for the responses, they helped. I'll share my biped balance script with the community once it's smoothed out.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645810.57/warc/CC-MAIN-20230530131531-20230530161531-00118.warc.gz
CC-MAIN-2023-23
1,277
18
https://pspdfkit.com/guides/ios/annotations/synchronization/
code
Synchronizing PDF Annotations on iOS
PSPDFKit for iOS supports two approaches for synchronizing annotations to your server, and across multiple users, devices, or sessions:
- PSPDFKit Instant
- Building your own solution using our APIs
Both approaches are deployed on your infrastructure and can be integrated with our web, mobile, and desktop SDKs.
| | Instant Synchronization | Building Your Own |
| --- | --- | --- |
| Real-time sync | Built in | Not built in |
| Conflict resolution | Built in | Not built in |
| User authentication | Built in | Not built in |
| Offline synchronization | Built in | Not built in |
| Incremental sync | Built in | Not built in |
PSPDFKit Instant is our prebuilt solution for synchronizing annotations to your server, and across multiple concurrent users, devices, or sessions. It's a licensable component that's included as part of PSPDFKit Server. Instant consists of three parts: the PSPDFKit Server backend that synchronizes documents and annotations and manages authentication; the PSPDFKit Instant component that handles conflict resolution, version tracking, diffing, and merging; and the web, mobile, and desktop SDKs that integrate into your app. Learn more about PSPDFKit Instant.
Building Your Own Solution
PSPDFKit web, mobile, and desktop SDKs have easy-to-use APIs for importing/exporting annotations and forms data as part of a robust synchronization solution.
ℹ Info: When building your own solution, our technical support team will gladly help with questions specific to using our SDK. However, due to the complexity of building your own synchronization solution, support is limited to PSPDFKit technology.
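For the "building your own" route, the server half is entirely yours to design. The sketch below is not PSPDFKit API; the function and field names are hypothetical, and it only illustrates the shape of a naive per-document annotation store that the SDKs' import/export calls would talk to. Note how the items the table lists as "not built in" (conflict resolution, auth) are left as your responsibility.

```python
# Hypothetical server-side store for exported annotation JSON; last write wins.
# Real conflict resolution, authentication, and persistence are up to you.
from datetime import datetime, timezone

store = {}  # document_id -> {"annotations": [...], "updated_at": ...}

def push_annotations(document_id, annotations_json):
    store[document_id] = {
        "annotations": annotations_json,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }

def pull_annotations(document_id):
    record = store.get(document_id)
    return record["annotations"] if record else []
```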
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100489.16/warc/CC-MAIN-20231203062445-20231203092445-00043.warc.gz
CC-MAIN-2023-50
1,599
16
https://www.robotmissions.org/2020/01/12/
code
Currently at step #70 / 155 for the operator interface kit. The next kit log will make more progress on this, as mentioned in the previous kit log. But first, a better plan is needed: if progress continues at the rate of today's kit log, it will be very slow. A better plan for timing will help speed things up.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816734.69/warc/CC-MAIN-20240413114018-20240413144018-00428.warc.gz
CC-MAIN-2024-18
365
1
https://wiki.ubuntu.com/Testing/VirtualBox
code
ISO testing with VirtualBox VirtualBox is a useful tool in testing ISO images since you can create imaginative disk configurations, erase or resize disks without concern for your system and test images while working normally on your machine. You can do the same thing with VMware Workstation but VirtualBox has a GPL version and can be used without a serial number. KVM is another virtualization tool, but requires a Linux host and a CPU with hardware virtualization capabilities (Intel VT-x or equivalent), where VirtualBox can use non-Linux hosts and will use software emulation on CPUs lacking VT-x support. Virtualbox OSE is included in Ubuntu's universe repository for releases from Gutsy to Quantal. In newer releases it is deemed less free and so is named "virtualbox" and is in the multiverse repository instead. In either case, you just have to install the "virtualbox-ose" or "virtualbox" package using synaptic, adept or one of the command line tools (sudo aptitude install virtualbox-ose , for example). In some earlier releases, when using apt-get install, the necessary kernel module won't be installed (recommended package virtualbox-ose-modules) automatically and you will need to install it afterwards. Since Intrepid, installing virtualbox-ose seems to be enough. In order to get access to the kernel module you have to add your user to the vboxusers group (created by the installer). Go to System -> Administration -> Users and Groups. Click Manage Groups and scroll down to the vboxusers group. Click Properties and add your user to it. Log out and back in for the group settings to take effect. Setting up a virtual machine Click New to create a new Virtual Machine (VM). Follow the instructions on the VirtualBox site. Setting up VM hardware Give the VM a name and select Linux 2.6 as the OS. If you don't already have a virtual disk defined you need to create one. Select a dynamically expanding image. 3 GB is probably a good size. Less for Xubuntu or Ubuntu-server. Mounting an ISO You could burn the ISO to a CD and mount that in VirtualBox, but it's more convenient to simply mount the ISO directly. Starting your VM Click start to boot into your new VM. You should see the boot screen from the CD. Boot and install as normal. When installation is complete, reboot from within Ubuntu and the virtual CD should be automatically unmounted. If not, press F12 when VirtualBox restarts the VM and select booting from the hard drive. You can set up the VMs as you test, but you can also set several machines up ahead of time. That way you can set up several machines, each with its own hardware configuration, linked to a virtual disk and an ISO image. Giving each VM it's own disk will require a good deal of free space on your test system, so you may want to share a generic disk for installs (just connect to it before starting a given VM). If you have an rsync script on a cron job you can have the ISOs you intend to test downloaded overnight (or while you are away). The links from VirtualBox to the 'CD drive' (which you point to the rsync target) will stay mounted. When you want to test a given ISO you just start the pre-defined VM and VirtualBox will boot the freshly rsynced image. This is useful when testing the same case repeatedly and when time gets short closer to release. Guest Additions in Jaunty There is an issue with VirtualBox Guest Addons and Jaunty. When trying to install the Guest Additions in Jaunty you might get the following error: Warning: unknown version of the X Window System installed. 
Not installing X Window System drivers. The reason is that the 2.1.4 VirtualBox video drivers do not yet support the Xorg version that ships with Alpha 6. There is a workaround to make it work: you need to extract the installer. So run VBoxLinuxAdditions-x86.run (or -amd64.run) --target ~/tmp (you need to create the destination folder first, of course). Then check the install.sh file; on line 415 you will see this: 1.5.99.* | 1.6 ) Now, after the 1.6, add a .* (making it 1.6.*). This is needed because the version returned by X -version is 1.6.0, not 1.6. After you've fixed the install script, run it as sudo.
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00318-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
4,145
25
https://www.fr.freelancer.com/projects/visual-basic/scrolling-text-that-can-modified/
code
Simple App. Must be in VB6, must provide source, so I can fine-tune after you get it working (for example, pick any graphic, I can change it later). See attached mock-up. This would be a welcome slide in our church auditorium. This is one form. The text in the welcome box scrolls across the screen. As new visitors arrive, you can type a name in the lower box (where it says "George Washington" in the mock-up) and then hit the "Add Name" button. Then the next time the scroll starts, it adds " . . . " plus the new name to the end of the scroll string. There does not need to be a database; this can all be just variables, and when the app closes, it loses all the names. FYI: with our projector, I can adjust it so the lower portion of the screen (the data entry portion below the red line) is off the screen and only visible on the laptop, but does not get projected on the screen. 24 freelancers are bidding an average of $54 for this job. This really is very easy. I can code this program in Pascal instead of VB. I will make it fully customizable so you won't have a need to get into the code; everything you need can be changed via the settings. Hi, I am highly interested in working with you. Please see PMB for details. Best Regards, Bilal Niaz Hi!! I have done several projects in VB and I could do this simple job in a couple of hours. Please check your Private Message Board for more details. Thanks & Regards fuchsiasoft Hi, "CDOC INFOTECH", an IT division of C. Doctor India Pvt. Ltd, is located in Ahmedabad, India. Our expertise is in ERP systems. We develop applications using VB with back end Oracle, MS SQL Server, MS Access. Library Plus Dear buyer: I am an experienced VB 4-5-6 coder and this job is really a piece of cake for me. I can deliver it in no time. I also think it's a good idea to include the capability to change the background image and to allow Plus I myself am a MICROSOFT CERTIFIED PROFESSIONAL and have been running a software company in New Delhi for the past 5 years. We can provide you with the solution just as per your requirement. We can assure you of the best of Plus Hi, We can develop the software a better way. You need not adjust the projector to hide the text box; as soon as the mouse is moved or a key is pressed, the textbox can be made visible. Be assured of quality delivery Plus Please go through our profile for more details. Alchemist Techno-Labs is a venture of youngsters working in different areas of technology... We are committed to our goals and missions, and are timely updated on the Plus
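The buyer asks for VB6; the sketch below is only a concept illustration in Python/Tkinter of the same behaviour: a scrolling welcome string, plus an entry box and a button that append " . . . <name>" to the marquee. Names live in a plain list, so everything is lost when the app closes, as the brief allows. Widget sizes and the scroll interval are arbitrary choices.

```python
# Concept sketch (not the requested VB6): scrolling welcome text with an
# "Add Name" entry, kept in memory only.
import tkinter as tk

message = "Welcome to our service!"
names = []

root = tk.Tk()
marquee = tk.Label(root, font=("Arial", 32), width=40, anchor="w")
marquee.pack()
entry = tk.Entry(root)
entry.pack()

def add_name():
    if entry.get().strip():
        names.append(entry.get().strip())
        entry.delete(0, tk.END)

tk.Button(root, text="Add Name", command=add_name).pack()

offset = 0
def scroll():
    global offset
    text = message + "".join(" . . . " + n for n in names) + "   "
    offset = (offset + 1) % len(text)
    marquee.config(text=text[offset:] + text[:offset])
    root.after(150, scroll)   # shift roughly every 150 ms

scroll()
root.mainloop()
```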
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864822.44/warc/CC-MAIN-20180622220911-20180623000911-00372.warc.gz
CC-MAIN-2018-26
2,549
11
http://www.funnyjunk.com/I+still+feel+ashamed/funny-pictures/4883752/
code
I still feel ashamed
I agreed to pay the smartest kid in my class if he would write a paper that would get an A for me. After he sent it to me, I went through it and made a lot of mistakes to get me a low B so I wouldn't have to pay him.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867217.1/warc/CC-MAIN-20180525200131-20180525220131-00213.warc.gz
CC-MAIN-2018-22
302
7
https://undo.io/about-us/
code
Undo’s technology allows some of the biggest software companies to improve the quality of their products and fix customer issues quickly. We help our customers focus on the things that matter by reducing the time, pain and effort involved in building the world’s most complex software. Undo is a fast-growing, venture-backed technology start-up based in San Francisco and Cambridge, UK. Undo was founded by Greg Law and Julian Smith who worked on evenings and weekends in Greg's garden shed to create a better debugging tool than the commercial and open source solutions which had frustrated them for many years. Following receipt of seed funding in 2012, the company is now the leading commercial supplier of reversible debugging tools for Linux and a pioneer in developing solutions that help companies to better understand their software. Undo is backed by an experienced group of investors including Cambridge Innovation Capital (CIC), the Cambridge Angels, Rockspring, Martlet and Jaan Tallinn, co-founder of Skype and Kazaa.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826968.71/warc/CC-MAIN-20181215174802-20181215200802-00354.warc.gz
CC-MAIN-2018-51
1,144
7
http://derekwyatt.org/2015/07/27/getting-character-under-cursor-in-vim/
code
Getting the ascii / hex code of the character under the cursor in Vim is harder than I thought...
- The simplest way to get it is to just show it on the statusline (or equivalently in lightline) with %B, as visible in the docs or in this Vim Tips entry.
  - The problem is that I can't get the value into a Vim variable this way.
- Another way is to simply yank the damn thing with something like "zyl, which will put the value in the @z register, as described here.
  - If virtualedit is set to anything, then this will return a space if there is no character under the cursor, in any mode where virtualedit applies.
  - What's more, I may very well not want to run a normal command like this, especially in…
- Another way is to do something like let ch = getline('.')[col('.') - 1], which is also described here.
  - This doesn't respect unicode characters because their length is "interesting".
So here's what I eventually tried:

    function! LightLineCharacter()
      " Save what's in the `z` register and clear it
      let x = @z
      let @z = ""
      " Redirect output to the `z` register
      redir @z
      " Run the `ascii` command to get all of the interesting character information
      silent! ascii
      redir END
      " Clean up the output and split the line
      let line = substitute(substitute(@z, '^.*> ', '', ''), ',', '', 'g')
      let list = split(line)
      " Reset the `z` register
      let @z = x
      " `dec` and `hex` hold the values I want
      let dec = 0
      let hex = 0
      " If we've split something reasonable, then get decimal and hex values
      if len(list) >= 4
        let dec = list[0]
        let hex = list[2]
      endif
      " Return it the way I want
      return dec . "/0x" . hex
    endfunction

However, as I was researching the links for this post, I found the "real" answer:

    let dec = char2nr(matchstr(getline('.'), '\%' . col('.') . 'c.'))

You can find the explanation in one of the comments in the StackOverflow post but I'll also explain it here.
- getline('.') returns the entire line the cursor is sitting on.
- col('.') returns the column number that the cursor is sitting on.
- The regular expression \%nc matches a specific given column, where n is that column.
- The . regular expression matches exactly one character.
So, let's assume that the column the cursor is currently on is 31. Then the above code evaluates to:

    let dec = char2nr(matchstr(getline('.'), '\%31c.'))

which means that you're going to match the single character (multi-byte or otherwise) at position 31 in the string returned by getline('.'), and char2nr() is going to transform that character to a decimal number.
It's really not always that easy to figure out what needs writing when you're in Vimscript!
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500288.69/warc/CC-MAIN-20230205193202-20230205223202-00103.warc.gz
CC-MAIN-2023-06
2,545
34
https://play.google.com/store/apps/details?id=com.daddysoffice.lc3camera&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1
code
A smartphone that is no longer in use turns into an AI surveillance camera! AI monitoring tasks such as crime prevention, watching over children, and monitoring pets can be done with a single smartphone. It is a surveillance camera with AI that supports various detection patterns: not only motion detection, but also object detection (such as "people" and "cars"), detection based on the number of human faces, and AI image-analysis detection based on an independently created machine-learning model.
[Detection methods]
・ Timer detection: repeated recording / shooting at regular intervals
・ Motion detection: recording / shooting when there is movement
・ Object detection: recording / shooting when the number of specified objects exceeds the set value
・ Face detection: recording / shooting based on the number of human faces
・ AI detection: recording / shooting based on a unique machine-learning model
・ Shake detection: recording / shooting when shaking is felt
Also, by connecting to LiveCapture3 Remote, you can check live video on your smartphone wherever you are, such as on the go. Click here for details on LiveCapture3 Remote. You can also connect to LiveCapture3 for Windows and use it as a network camera.
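The app's internals aren't documented here, but the "motion detection" mode it lists is commonly built from frame differencing. A rough, generic sketch with OpenCV follows; the threshold and the changed-pixel trigger count are arbitrary and would need tuning.

```python
# Rough frame-differencing sketch of a "motion detection" trigger, not the
# app's actual implementation. Threshold values are arbitrary.
import cv2

cap = cv2.VideoCapture(0)          # phone/webcam feed
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:      # "movement" -> start recording/shooting
        print("motion detected")
    prev = gray
```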
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00788.warc.gz
CC-MAIN-2022-49
1,229
13
http://www.blogtalkradio.com/jaykubassek/2012/09/25/wake-up-with-jay-kubassek
code
It's hard to keep the focus on your mission in life sometimes when you don't see any results coming from it. What you have to ask yourself in this situation is this: if I am willing to stick with my mission, even if the results aren’t immediately apparent, will I or will I not be better off months – years – decades down the road?
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170609.0/warc/CC-MAIN-20170219104610-00495-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
525
4
https://www.upwork.com/o/jobs/browse/skill/github/
code
We are a company in Japan developing public web apps, enterprise apps, and iOS/Android applications. We're growing fast and looking for an experienced Technical Management Operations (TMO) person with excellent communication skills to support our CTO. The TMO will be responsible for supporting our CTO and facilitating crucial tasks across the company. Ideal candidates should be detail-oriented, organized, punctual, and able to think 10 steps ahead. Being proactive and able to make decisions with minimal guidance or communication is crucial in this role. Required skills: Git experience, Linux commands, Basecamp, RSS and basic auth (RSS basic auth), Zapier, shell scripts, a quick HTML fix, etc. This position will provide support for the CTO, including: maintaining standards for project management; building a manual for project management policies, processes, and methods; GitHub development and installations; synchronizing project development and information between GitHub
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645366585.94/warc/CC-MAIN-20150827031606-00255-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
963
8
http://wingware.com/pipermail/wingide-users/2012-July/009831.html
code
[wingide-users] Py4A coding eric at earnst.com Mon Jul 9 22:21:36 MDT 2012 On Mon, Jul 9, 2012 at 6:41 AM, Wingware Support <support at wingware.com>wrote: > Eric Earnst wrote: >> I've just started playing with Python on my Android table through the >> scripting layer. I'm successfully communicating from Wing to the tablet >> but with the way it is setup, I don't get autocomplete in the editor. It >> looks like I can create an android.pi file with the hints but before I do I >> wanted to check to see if anybody else has already done this or found >> another way? > If you've got the debugger working, does it work to run to a breakpoint > and work in the live runtime state? Both Debug Probe and the editor should > offer correct runtime-sourced autocompletion in that case, when working in > code that's active on the stack. > Another idea, depending on what the android API is written in, is to make > sure that it's also on the machine where Wing is running and on the path > configured in Project Properties. If it's an extension module then you > would need *.pi files (either auto-generated by Wing or by some script you > provide) but if it's *.py then a copy of it would probably get static > analysis working for you. > Stephan Deibel > Wingware | Python IDE > Advancing Software Development It doesn't work in the debugger. You communicate between the PC python app and the tablet over a tcp connection using remote procedure calls (as I understand it, you first start a server running on the tablet using the android scripting layer) so I think you just need to know the API. Once you get the python app working you can move it over to the tablet and execute it there (or generate an apk to install it). All the files (in java) are on the tablet so I think the pi file is the way to go. If I can get it figured out I'll upload it to the Wiki when it's -------------- next part -------------- An HTML attachment was scrubbed... More information about the wingide-users
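The thread describes talking to the tablet over TCP via SL4A's RPC server. As a heavily hedged sketch (assuming the android.py helper module that ships with SL4A is on the PC's path, and that the server is reachable at the address given; the host/port below are placeholders for your adb forward or LAN address):

```python
# Sketch of calling the SL4A RPC server on the tablet from the PC.
# Assumes SL4A's android.py is importable here; address is a placeholder.
import android

droid = android.Android(addr=("192.168.1.50", 9999))   # placeholder host/port
result = droid.makeToast("Hello from the PC")          # a standard SL4A RPC
print(result.error or "toast shown on the tablet")
```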
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827998.66/warc/CC-MAIN-20181216213120-20181216235120-00039.warc.gz
CC-MAIN-2018-51
1,984
35
https://bainbridgemusicperth.com/bluehost-web-site-hosting/
code
Bluehost Web Site Hosting Finding a high-grade inexpensive webhosting provider isn’t simple. Every site will certainly have various needs from a host. Plus, you need to compare all the attributes of a holding business, all while seeking the most effective bargain possible. This can be a whole lot to type via, especially if this is your very first time buying hosting, or constructing a site. Many hosts will certainly offer incredibly inexpensive initial pricing, only to elevate those rates 2 or 3 times greater once your first get in touch with is up. Some hosts will certainly supply totally free rewards when you register, such as a totally free domain name, or a cost-free SSL certificate. While some hosts will certainly have the ability to offer better efficiency and high levels of security. Bluehost Web Site Hosting Below we dive deep into the most effective economical web hosting plans out there. You’ll discover what core organizing functions are essential in a host as well as exactly how to analyze your very own organizing requirements to make sure that you can pick from among the most effective economical holding service providers listed below. Disclosure: When you buy a web hosting package with links on this page, we earn some commission. This aids us to maintain this site running. There are no extra costs to you whatsoever by utilizing our web links. The list below is of the best economical web hosting bundles that I have actually personally utilized as well as tested. What We Take into consideration To Be Cheap Host When we explain a web hosting bundle as being “Cheap” or “Budget plan” what we imply is hosting that falls under the price brace between $0.80 to $4 monthly. Whilst looking into economical hosting suppliers for this guide, we checked out over 100 different hosts that fell into that rate array. We then evaluated the high quality of their cheapest organizing plan, worth for cash as well as customer care. In this short article, I’ll be discussing this world-class website holding company and also stick in as much relevant info as possible. I’ll discuss the functions, the pricing choices, as well as anything else I can think of that I think could be of benefit, if you’re deciding to join to Bluhost as well as obtain your websites up and running. So without further ado, let’s check it out. Bluehost is among the biggest host business on the planet, obtaining both huge advertising support from the company itself and associate marketing professionals who advertise it. It truly is a huge company, that has actually been around for a very long time, has a huge track record, and is definitely among the leading choices when it comes to web hosting (definitely within the top 3, at least in my publication). But what is it precisely, and also should you obtain its solutions? Today, I will answer all there is you require to recognize, given that you are a blog writer or a business owner that is trying to find a host, and also does not understand where to start, considering that it’s a fantastic service for that audience as a whole. Allow’s visualize, you want to hold your websites and also make them noticeable. Okay? You already have your domain name (which is your site destination or URL) and now you want to “turn the lights on”. Bluehost Web Site Hosting You need some organizing… To achieve every one of this, and also to make your website visible, you require what is called a “web server”. 
A server is essentially a dedicated machine that stores all of your website's data (images, text, video clips, links, plugins and other information). It has to stay on and connected to the internet 100% of the time (I'll come back to the term "downtime" later), and, without getting too technical, it also needs a file transfer protocol (FTP) so that browsers can fetch and display your site as intended. All of this is either expensive or requires real technical skill (or both) to set up and maintain. You could absolutely go out, learn all of this and set it up yourself... but instead of buying and maintaining a server, why not simply rent hosting? This is where Bluehost comes in. You rent space on their servers (called shared hosting) and publish your website from there. Since Bluehost keeps all your files, it also lets you install a content management system (CMS) such as WordPress. WordPress is an extremely popular CMS, so it just makes sense to have that option available (almost every hosting company offers it these days). In short, you no longer have to set up a server and then separately bolt on the software you build your content with; it now comes as one package.
Why does this matter? Imagine your server sits in your home. If anything happens to it at all, your files are gone. If something goes wrong with its internals, you need a technician. If something overheats, breaks down or gets corrupted, you have a real problem. Bluehost takes all of these hassles away and handles everything technical: pay your server "rent" and they take care of the rest. Once you have bought the service, you can focus on adding content to your website or put your effort into your marketing campaigns.
What Services Do You Get From Bluehost?
Bluehost offers a range of services, but the main one is of course hosting. The hosting itself comes in different types: you can rent a shared server, a dedicated server, or even a virtual private server. For the purposes of this Bluehost review, we will focus on the hosting and related services that a blogger or online entrepreneur would need, rather than going too deep down the rabbit hole into offerings aimed at more experienced users.
- WordPress, WordPress PRO, and e-Commerce: these hosting packages let you host a site using WordPress and WooCommerce (the latter of which enables e-commerce). After purchasing any of these plans, you can start building your website with WordPress as your CMS.
- Domain Marketplace: you can also buy your domain name from Bluehost instead of another registrar. Doing so makes it easier to point your domain at your host's name servers, since you are managing both in the same place.
- Email: once you have bought your domain, it makes sense to get an email address linked to it. As a blogger or online entrepreneur you should pretty much never use a free email service, like Yahoo! or Gmail, for this; an email like that makes you look unprofessional. The good news is that Bluehost gives you one for free with your domain.
Bluehost also offers dedicated servers. You may be asking, "What is a dedicated server anyway?" The thing is, Bluehost's basic hosting packages can only handle so much traffic before you need to upgrade your hosting, because the usual servers are shared: one server may be serving two or more websites at the same time, one of which could be yours. What does this mean for you? It means the server's resources are shared and it is doing several jobs at any given time. Once your website starts hitting around 100,000 visits per month, you are going to need a dedicated server, which you can also get from Bluehost from $79.99 per month. This is not something you should worry about when you are starting out, but keep it in mind for sure.
Bluehost Pricing: How Much Does It Cost?
In this Bluehost review, I'll focus mostly on the Bluehost WordPress hosting packages, since they are the most popular and most likely the ones that will suit you best (unless you are a huge brand, company or site). The three available plans are as follows:
- Basic Plan: $2.95 per month / $7.99 regular price
- Plus Plan: $5.45 per month / $10.99 regular price
- Choice Plus Plan: $5.45 per month / $14.99 regular price
The first price is what you pay upon sign-up, and the second is the regular price after your initial term with the company ends. Basically, Bluehost bills you on a yearly basis, and you can also choose how many years of hosting you want to prepay. If you pick the Basic plan, you pay $2.95 x 12 = $35.40 starting today, and from your 13th month onwards you pay $7.99 per month, still billed annually. If you are serious about your website, you should absolutely take the three-year option: for the Basic plan that means $2.95 x 36 = $106.20, and only in your fourth year do you start paying $7.99 per month. If you think about it, this approach saves you about $120 over three years. It's not much, but it's still something. If you want more than one site (which I highly recommend, and if you're serious you'll probably end up with more at some point), you'll want the Choice Plus plan: it lets you host an unlimited number of websites.
What Does Each Plan Offer?
For the WordPress hosting plans (which are similar to the shared hosting plans but more geared towards WordPress, and are what we focus on here), the features are as follows.
For the Basic plan, you get:
- One website only
- A secured website via an SSL certificate
- A maximum of 50GB of storage
- A free domain for a year
- $200 of advertising credit
Remember that domain names are purchased separately from the hosting. You can get a free domain name with Bluehost here.
For both the Bluehost Plus and Choice Plus plans, you get the following:
- An unlimited number of websites
- A free SSL certificate
- No storage or bandwidth limit
- A free domain name for one year
- $200 of advertising and marketing credit
- One Office 365 mailbox that is free for one month
The Choice Plus plan has the added advantage of CodeGuard Basic Backup, a backup system where your files are saved and replicated. If any kind of crash happens and your website data disappears, you can restore it to its original form with this feature. Notice that even though both plans cost the same at sign-up, Choice Plus then defaults to a regular price of $14.99 per month after the number of years you've chosen.
What Are The Benefits Of Using Bluehost?
So, why pick Bluehost over other hosting services? There are many hosts out there, many of which are resellers, but Bluehost is one of the select few that have stood the test of time, and it's probably the best known of them all (and for good reasons). Here are the three main advantages of choosing Bluehost as your hosting provider:
- Server uptime: your website will not be visible if your host is down; Bluehost has over 99% uptime. This is extremely important when it comes to Google SEO and rankings. The higher the better.
- Bluehost speed: how fast your server responds determines how quickly your website shows up in a browser; Bluehost is fast, which means you will reduce your bounce rate. It may not be the very best when it comes to loading speed, but a quick site is still hugely important for user experience and for your rankings.
- Unlimited storage: if you get the Plus plan or above, you need not stress about how many files you store, even videos, because your storage capacity is unmetered. This really matters, because you will probably run into storage limits down the track otherwise, and you never want that to become a headache.
Finally, customer support is 24/7, which means that no matter where you are in the world, you can contact the support team to fix your site's issues. That's pretty standard nowadays, but we tend to take it for granted and it's still very important.
One more note on refunds: if you've claimed a free domain name with them, a $15.99 fee will be deducted from the amount you originally paid (I imagine this is because it sort of takes the domain off the market; I'm not certain, but there is probably a hard cost for registering it). And any refund request after the first 30 days is void (although, in all honesty, they probably have to be strict here). So as you can see, this isn't necessarily a "no questions asked" policy like some of the other hosting options out there, so make sure you're okay with the rules before continuing with the hosting.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00154.warc.gz
CC-MAIN-2022-49
13,876
82
https://developer.blender.org/T82388
code
Operating system: Win 10
Caused by rB4c7b1766a7f1: Fix undo steps not allowing re-using old BMain in non-global undo.
Short description of error
In the sculpting template, Undo (Ctrl+Z) doesn't undo the first stroke. However, if you then press Redo (Shift+Ctrl+Z), it will actually undo the first stroke... This is a separate issue, see T82532: Sculpt Redo does not always properly apply the first stroke of the sculpt session.
The same issue causes a slightly different bug when one switches from Object mode to Sculpt mode, draws a stroke, then undoes twice: the object does not go back to Object mode as expected.
Exact steps for others to reproduce the error
Sculpting template:
- Perform a brush stroke
- Undo (Ctrl+Z): it should undo the brush stroke, but nothing happens.
- Redo (Shift+Ctrl+Z): now it will undo the brush stroke.
Object/Sculpt mode switch:
- Default startup file, switch to Sculpt mode.
- Draw a stroke.
- Undo twice.
Even without the sculpting template, you can still replicate this erratic behavior in Sculpt mode: if you load a texture/perform a brush stroke and try to undo, the undo won't work, but Redo will undo the brush stroke...
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00532.warc.gz
CC-MAIN-2021-10
1,129
14
http://www.esri.com/legal/section508/ai81-known
code
ArcInfo 8.1, 8.1.2, 8.2, and 8.3 Last updated June 16, 2004 Note: The extensions carry the Section 508 compliance rating of the main product. At the time of this writing, these are the currently known issues related to Section 508 compliance. This list may not be comprehensive and will be periodically updated as more information becomes available. Not all dialog boxes provide access through keyboard equivalent. (§1194.21 (a)) The Select by Location and Select by Attributes dialog boxes do not let the user "tab" through the options on the dialog boxes from the keyboard. The mouse must be used. These issues have been addressed in ArcInfo 9. The ArcTools application within ArcInfo Workstation does not provide keyboard shortcuts for all menu selections. Note: Keyboard alternatives are required only when the function or the result of performing a function can be represented with words. Many GIS functions (such as drawing lines on the screen) cannot be represented with words, and therefore no keyboard equivalent exists for these functions. Information about user interface objects (§1194.21 (d)) The main menu items in ArcMap and ArcCatalog do not provide sufficient information about the user interface elements to be reliably used with some assistive technologies (e.g., narrator). MapTips and tables do not respect user-selected display attributes. (§1194.21 (g)) The tables environment of ArcInfo desktop does not use the system settings for font properties. However, the font used in the tables environment can be reset from within the application itself. The color of MapTips text cannot be changed via global or application settings and remains black. Animated information shall be displayable in at least one nonanimated mode. (§1194.21 (h)) ArcInfo desktop uses a spinning (animated) globe to indicate work in progress. Currently, a nonanimated equivalent option is not available.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.57/warc/CC-MAIN-20180422002521-20180422022521-00009.warc.gz
CC-MAIN-2018-17
1,903
15
https://community.spiceworks.com/topic/141745-considerations-for-finance-data-server
code
We've just upgraded our Quickbooks from Premier to Enterprise, and I was able to swing getting an extra Windows Server license for it. I'll also be moving our finance share (around 15GB) over to it. My reason for this is primarily that my primary file server stays very busy, especially RAM usage, and I'd rather not add anything on top of that - my ESXi hosts have plenty left over. My other Windows servers are all running multiple roles, and I'd like to avoid adding any more to them. It also strikes me as an opportunity to maybe be a bit more secure on it - best practices and all. It'll be running Server 2008 64-bit. Only a handful of users need access to the share on it. Any thoughts? Most of my other servers are pretty straightforward, but since there's critical company and employee data on this one, it'd be good to at least consider what more can be done. This is an internal machine - there won't be any access from outside.
Like any server, plan for layered security: both share- and file-level security, with the least privilege needed. Install as few apps as you can, and make sure logging and auditing are enabled. Make sure as few people as possible have access to the share, and make the share hidden. Depending on how many people have it, map it for them in their profile, so that as few people as possible know its network location. And tell them it's cursed: any messing about will kill off their livestock for two generations.
Good tips so far, thanks. Nothing outside of the Quickbooks database manager is installed, and auditing is enabled. The drive is mapped by Group Policy Preferences with item targeting, so only members of Finance get it. I should be doing it on all my servers, but I enabled Windows Firewall and all the exceptions too. How about encryption - does anyone do anything of that nature?
Yes, but we have to look at the benefit in your situation. Each Quickbooks user would need access to the data, and they would need to be able to decrypt it, and if they can decrypt it, they can copy it. If you are worried about users outside the group, your other security measures may already provide the needed protection beyond the program's internal security. There may be some benefits depending on your physical security, but in general this may not be the best place to start with encryption outside an application if you are not already using it.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00346-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
2,357
14
https://len3a.com/blogs/news/lovell-design-the-design-of-a-product-is-more-than-just-creating
code
The design of a product is more than just creating an object. The aesthetics of a product are, for me, just as important a part of its overall design. Depending on what material it is made from, we make so many assumptions at a glance about how well it will perform and how well it will do its job. Is it durable? Is it fragile? Will I want to sit back and admire it from a distance, or will I want to go and touch it? The design and aesthetics of a product are both valuable; get these two right in combination and you will create beautifully designed products everyone will value.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00449.warc.gz
CC-MAIN-2024-18
561
1
https://blog.calitunes.com/new-calitunes-updates/
code
Calitunes New Updates: We have successfully rewritten all the ugly "URLs" in Calitunes, which will also improve #Calitunes' SEO ranking. This was reprogrammed last night with Uti Mac and Samuel Douye Ogu at the #Shankara Hotel, Calabar. Samuel Douye Ogu, a very good friend of mine, travelled all the way from #uyo to #calabar just to contribute to the development of #calitunes, though last night was a sleepless night for us. So these are some of the new Calitunes URLs:
- The user's profile page URL now looks like this: http://www.calitunes.com/utimac instead of the long, ugly old URL with parameters. It's now very easy to share your Calitunes profile URL with your friends around the world!
The new track/music page URL now looks like this: http://www.calitunes.com/track/2299. This also makes it easy to share direct links to tracks/music!
The new video streaming/download page URL now looks like this: http://www.calitunes.com/video/64. This also makes it easy to share direct links to videos!
The new playlists URL for a particular user now looks like this: http://www.calitunes.com/utimac/playlists
The new playlist page URL now looks like this: http://www.calitunes.com/playlist/2
Others include http://www.calitunes.com/explore, http://www.calitunes.com/stream, http://www.calitunes.com/explore/videos, http://www.calitunes.com/upload, http://www.calitunes.com/invite, http://www.calitunes.com/join, etc. There are many more; I can't list them all here. You can check www.calitunes.com to get used to all the new URLs.
Note: if you find any bugs in the URLs, kindly report them to us; Calitunes will reward you for that! Thank you, Happy Streaming... More coming soon..
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00039.warc.gz
CC-MAIN-2020-10
1,700
13
https://forums.macrumors.com/threads/is-there-any-way-to-ascertain-the-name-of-a-previously-deleted-folder.2046510/
code
Hello and thanks to anyone who cares to read my question! So I had a password that I stored as a filename on a folder (...) so I could easily look it up/remember it. By accident I managed to delete this folder (...) I'm not sure exactly when it happened, but it could have been a while ago - maybe a month, for example. So is there some way to ascertain this folder name again now..? Somewhere you can look up changes to directory trees? A log of deleted files/folders? Any ideas? Anything? Any help would be greatly appreciated. PS: I'm running OSX 10.9.5, but I sort of guessed any possible solution to this wouldn't be "version-specific" --- Post Merged, May 20, 2017 --- I forgot to add that the folder name is "username password"... and I remember the username... so I would recognize it/it is searchable.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825728.30/warc/CC-MAIN-20181214114739-20181214140239-00279.warc.gz
CC-MAIN-2018-51
819
1
https://www.silo.ai/blog/kaggle-computer-vision-competitions
code
These days it is possible to learn pretty much anything online. Kaggle is a platform that allows people into machine learning to learn together. Many Silo.AI experts are familiar with the global online community of over one million data scientists. Particularly two of us, AI Engineer Joni Juvonen and AI Scientist Mikko Tukiainen can’t seem to get enough of exploring Kaggle’s computer vision competitions, which they tackle with their two-person team rähmä.ai. Currently, Joni and Mikko are fighting for a spot on the top 10 at the Deepfake Detection challenge, where the winners (typically top 5 participants) will get a stunning $1 million. The challenge to identify Deepfake content has been put together by companies like Amazon Web Services, Facebook, Microsoft and academics. Let’s hear from Joni and Mikko what makes Kaggle such a special way for them to stay up to date on the latest computer vision technologies and learn together. What is Kaggle? Kaggle is an online community of 1.000.000+ registered data scientists. In the central part of the platform are the data-science challenges, that often have a machine learning twist and come with some form of winning prize like kudos, swag or money. Anyone can enroll in the competitions to test their skills. This makes Kaggle an amazing way to see where you are as a data scientist and learn from the community as you go from one challenge to the next. To me Kaggle is, most of all, the world's largest community for data scientists and machine learning practitioners. In Kaggle, users can publish their own datasets, write and share code, and use Kaggle's cloud-based Jupyter Notebooks to build models. All of this is available for free and can help you as a data scientist to learn faster. In a quickly advancing field such as artificial intelligence and machine learning, it’s crucial to have a community where you can boost your skills on a regular basis to try out new models on fresh real-world datasets. What kind of challenges have you participated in? I started Kaggling two years ago, in 2018. Since then I’ve participated in nine computer vision competitions. My most successful one so far was to score on the top 3% in Histopathologic cancer detection. The best position in a serious contest was at the top 6% when we teamed up with Mikko and Antti Karlsson to detect steel defects for the steel and mining company Severstal. My previous competition was organized by Uber competitor Lyft to improve 3D object detection for self-driving cars. I also wrote about my experience with LIDAR U-Net model. I got interested in Kaggle by getting drawn in by Joni. Since then, I’ve joined four competitions, all together with Joni. All of these have had the computer vision aspect in common and they have been tremendously helpful in getting experience with various kinds of industry problems. Kaggle is an important hobby for both of you. What makes tweaking your models in the online community so fun? The problems presented in the competitions are topical and often come from real life needs, such as the ongoing Deepfake Detection Challenge (rähmä.ai is in the top 15 at the moment). Kaggle has made it easy to participate and to get motivated by receiving different rewards throughout the challenge. For example, you can get praised by sharing your findings with other Kagglers in the community. Kaggle veterans often openly share their knowledge, so that each competition can be an oasis for learning the newest and best tricks in the field. 
The competitions offer an excuse to learn new skills and a to try them in practice with real-world data. I think learning and puzzles are fun, but what quickly got me addicted was the competition aspect. Seeing your solution get a high score on the competition’s leaderboard is rewarding. As the challenges usually last a few months, it's a race of problem-solving and continuous improvements to keep up and finish with a top score. Many others would like to develop their data skills. How to get started with Kaggle? It’s really easy to get started: you simply sign in with your Google account, fill in some personal information and then you’re ready to go. The scripts are written in Kaggle kernels that are similar to Jupyter Notebook, a code sharing and documentation tool we use at work too. Kaggle has a weekly GPU computation quota so that (small) models can also be trained on-site. Other competitors on Kaggle very eagerly share starter kernels along with baseline models, so no one will need to start developing their solutions from scratch. Pick an ongoing competition that interests you and get familiar with the problem by exploring the forum and baseline notebooks that others have shared. Then, you may copy, lightly modify, and submit one of the public notebooks to get yourself on the competition’s leaderboard. What have you learnt at Kaggle that is useful in your job as AI Scientist? I’ve learned a lot in regards to teamwork skills, sharing and searching for ideas. What comes to computer vision, I’ve been able to test many new CV techniques, that I look forward to using in a client project too. As always with AI projects, I’ve also had to gain a big bunch of mindfulness for the data preparation. The field of deep learning and computer vision is advancing rapidly, and many of the methods I learned from school are no longer something you would use in practice. For me, Kaggle competitions offer a fun way of keeping up and getting familiar with new advances. To score high on any challenge, you have to learn all the new tricks that gain an advantage. Kaggle forums and notebooks are usually the first time I see a new method in action. After a competition ends, the winning teams usually share their solutions, findings, or even code, and I have learned a great deal by just reading them. The best thing I got from Kaggle, however, is the hands-on practice. Last year, I gained experience from Kaggle competitions in detecting metastases from tissue images, classifying diabetic retinopathy and cellular images, identifying pneumothorax from chest x-rays, detecting different defect types from steel plates, locating 3D objects from self-driving car's LIDAR sensor, finding vehicles position and translation from images, and identifying deepfake videos. Many of these learnings I’ve been able to bring into my client work at Silo.AI too. Follow team rähmä.ai’s current competition for detecting deepfakes. Joni and Mikko look forward to open sourcing their model and to sharing more about how they built it after the competition is over – subscribe to our newsletter to stay tuned. We’re looking for more curious AI experts like Joni and Mikko! Get in touch with our recruiter [email protected] to discuss our current projects and check out our open positions. Want to discuss how Silo AI could help your organization? Join the 5000+ subscribers who read the Silo AI monthly newsletter to be among the first to hear about the latest insights, articles, podcast episodes, webinars, and more.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817171.53/warc/CC-MAIN-20240417173445-20240417203445-00001.warc.gz
CC-MAIN-2024-18
7,069
23
https://topassignmenthelp.com/write-a-synthesis-paper-comparing-and-contrasting-a-media-report-to-the-original-research-article/
code
Write a synthesis paper comparing and contrasting a media report to the original research article. Write a synthesis that compares the media report to the original research article. This synthesis should address: How similar are both accounts? Did the media report clearly and accurately describe the research? Were there any distortions in generalizing the results? Do you believe the popular press article accurately portrayed the topic? If not what did they misunderstand or fail to mention? Was the media report summarized objectively without bias?
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00212.warc.gz
CC-MAIN-2021-43
552
1
http://stackoverflow.com/questions/4035891/how-to-call-a-custom-ruleset-xml-for-php-code-sniffer
code
I'm trying to write a custom ruleset.xml for PHP CodeSniffer, but calling it from the command line without putting it in the default folder doesn't seem to work. Since the documentation seems to state otherwise, I'd like to ask if I'm doing something wrong here: :~/$ phpcs --standard=/home/edo/custom_ruleset.xml source/ ===> ERROR: the "/home/edo/custom_ruleset.xml" coding standard is not installed. The installed coding standards are PEAR, PHPCS, Zend, Squiz and MySource If that doesn't work: any suggestions on how to ship your own coding standard with your source? Thanks
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257830066.95/warc/CC-MAIN-20160723071030-00268-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
578
4
https://www.mail-archive.com/[email protected]/msg10021.html
code
I've installed Git for Windows from https://git-scm.com/ and after installation I can use Git CMD or Git Bash. I believe that `cmd` is the command line from Windows and `bash` is the equivalent from Linux. I can't find a difference in use, except some colors and autocomplete with double-tab in `bash`. What are the main differences between them? What is the point of adding a Linux command line to the Windows client?
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00535.warc.gz
CC-MAIN-2018-05
637
11
http://buyavape.co.za/legal-marijuana/legal-marijuana-how-high-is-too-high/
code
Marijuana is now legal in Oregon, and stoners statewide are celebrating. But is there such a thing as being too high? We talked to some longtime weed smokers to figure out what responsible marijuana consumption actually looks like. Subscribe for more videos: http://www.youtube.com/channel/UCV3Nm3T-XAgVhKH9jT0ViRg?sub_confirmation=1 Like us on Facebook: https://www.facebook.com/ajplusenglish Download the AJ+ app at http://www.ajplus.net/ Follow us on Twitter: https://twitter.com/ajplus Video Rating: / 5
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00319.warc.gz
CC-MAIN-2019-30
507
6
https://virgili0.github.io/Virgilio/purgatorio/define-the-scope-and-ask-questions/frame-the-problem.html
code
Let's dive right in! # Recap of ML systems The process of Data Science, i.e. the extraction of knowledge and decisions from a set of data, is composed of several steps. Simplifying as much as possible, we try to frame the problem we want to solve, then we study the data available, then we create models (Machine Learning models) that are used to make predictions or estimates. The definition of the problem is the first phase and also guides all the choices of design, implementation, and integration that will come later during the project. In particular, it is fundamental to classify the type of forecast you want to obtain once you have built the Machine Learning models: what kind of prediction should the system make? Should it predict a number or a label? Or will it have to group new data with those most similar to it? Or should he predict the next content to be recommended to a user? As we saw in the guide Introduction to ML systems, there are various ways to classify a problem related to data and learning from them, and in particular, the most important classification to frame the problem is as follows: is the problem a supervised or unsupervised learning task? A supervised problem is a problem in which the data we want to learn from are "labeled": for example, images labeled according to their content, or the data of a loan applicant who knows how many times he managed to repay the debt or not. By "showing" a Machine Learning algorithm so many examples, we hope that it will be able to generalize to new cases never seen before and predict the right label (not necessarily a specific class, it could even be a number). An unsupervised problem is without these labels, and our "learning from data" is usually about finding similarities between the various elements of the dataset and grouping them (clustering and other unsupervised learning techniques). The type of problem we will focus on in the Purgatory guides is the "supervised" type, which are the most common problems and those faced by neural networks (very powerful and flexible Machine Learning algorithms). We will see in the Machine Learning Theory guide also some methods to deal with unsupervised problems. After this summary, let's see what questions we ask ourselves (and which we ask the experts of the domain) to frame a data problem. The first thing to do when dealing with a new problem is to understand what kind of data I have available. Remember, without data you're not going anywhere! # Understand the data The first thing to do, whether you are carrying out a project of corporate scope or for yourself, is to understand what kind of data you have available: the process then requires you to start from the data to formulate a problem, not the other way around! As widely explained in Paradise, good-quality data is the only thing really necessary for quality of the Machine Learning system you want to train. When we look at the data we should ask ourselves the following questions: What are we supposed to do this with this data? This may sound like a stupid question, but we shouldn't assume that our aim is to cover the whole process of Data Science, from data collection to the creation of predictive models of Machine Learning. Maybe what we have to do is just a sub-stage of the ones that make up the complete process. For example, we may just have to clean up the data and format it well, or analyze it and report back to other considerations about it. 
Or we might just have to do some statistics on them to observe six phenomena, so avoid the part of creating the model. Nobody wants to waste time doing things that are not necessary! What form do the data take? The data we have at our disposal can be of a completely different type: tabular data: As the name indicates, data extracted from database tables (not necessarily relational). They are data organized in rows, have attributes, and generally a way to identify them uniquely. The rows are divided into fields, which in turn can contain various types of data (numeric, textual, links to images). This type of data is also called "structured". For example, I could have a database that collects user names, their ages, and associated tweets, in which case I would have tabular data that includes both text data (names and tweets) and numerical data (age). text: For example, application logs, tweets, textbooks. We can consider textual data all that is made up of characters. Text data can then be inserted in the context of tabular data, as a separate field. categorical: Categorical data is a subset of textual data: it does not contain information about the language, as a tweet could do, and often it is not even a sentence. For example, they could be "Red" or "Expired" and are considered labels rather than text data. numerical: Numerical data is any type of data calculated from a computer (opens new window): Integers, floats, doubles, etc. They can be continuous (e.g. temperature in degrees) or discrete (evaluation of a product). In the first case, the accuracy of the data measurement determines the type of numerical data used (usually float). audio: Audio files can be of various formats (opens new window), which greatly influence the resolution (and therefore the amount of information they contain). They can be phone recorded conversations, for example, customer care, or environmental sounds, or even animal sounds. images: Images can be presented in various formats (opens new window), which affect the maximum resolution and number of colors of the image (RGB or B&W). Images have the characteristic of being very rich in information, so often there is the problem of storage, and reading/writing them on disk can be a costly operation. Image data often has the advantage that with some scraping trick (downloading data from the Internet) you can enlarge the datasets and thus improve the performance of Machine Learning models. videos: Videos can be classified into two types: streaming (in real-time) and recorded (saved on disk). Videos are a fairly complex format to handle and extremely heavy from a storage point of view, so it is recommended not to start from this type of data if you are a beginner. time series: Time-series is a collection of data about events in time. This kind of data consists of historical series, such as the series of surveys of a sensor, or the history of the interactions of a social user. This type of data is useful to predict future behavior based on previous observations, such as predict whether a machine is about to break or if a user will buy a certain product. Time series have the concept of granularity, which is the amount of time between measurements: for example, we can have daily or annual data, or even a new example recorded every microsecond. Do we know the data is raw or unclean? By raw we mean the data as we come into possession of it. These can be clean and well-structured, or dirty and to be cleaned. In general, real-world data is hardly ever clean and tidy. 
There may be a lack of values, they may be unstructured, they may be superficially collected! They may be of low quality (e.g. an audio file with noise) or have useless information. So generally you always have to think about the raw form in which we have the data, and what cleaning steps will be needed to make it usable by Machine Learning models. This phase (called data cleaning or data preprocessing) is vital during the Data Science process, and is often spent like this 80 percent of a Data Scientist's time (opens new window). In the Purgatorio section "Work with data" there is an entire guide dedicated to data cleaning. - Are the data labeled or not? From this StackOverflow thread (opens new window): Typically, unlabeled data consists of samples of natural or human-created artifacts that you can obtain relatively easily from the world. Some examples of unlabeled data might include photos, audio recordings, videos, news articles, tweets, x-rays (if you were working on a medical application), etc. There is no "explanation" for each piece of unlabeled data -- it just contains the data, and nothing else. Labeled data typically takes a set of unlabeled data and augments each piece of that unlabeled data with some sort of meaningful "tag," "label," or "class" that is somehow informative or desirable to know. For example, labels for the above types of unlabeled data might be whether this photo contains a horse or a cow, which words were uttered in this audio recording, what type of action is being performed in this video, what the topic of this news article is, what the overall sentiment of this tweet is, whether the dot in this x-ray is a tumor, etc. Making Machine Learning models that learn from labeled data means formulating a "supervised" problem, while with unlabeled data the problem is called an "unsupervised" problem. - If they aren't labeled, is it possible to label them? The most powerful results of ML applications (vision, language understanding) require huge amounts of labeled data, and that the labeling issue is known as the biggest bottleneck of modern ML applications. How long does it take to manually classify 100,000 documents, even if you just have to choose between "Type A" and "Type B"? Labeling is a tiring and often tedious job, and it takes a lot of time to be done well. Services like AWS Amazon Mechanical Turk (opens new window) or the Google AI Platform Data Labeling Service (opens new window) (like many other tech vendors) provide distributed groups of workers with instructions on how to label a dataset. There are also free tools (Annotorius (opens new window), LabelMe (opens new window), LabelBox (opens new window)) that allow you to label yourself or work with a team. It's time to call your little brother and promise him 1 cookie for every 10 tagged examples. Often labeling a dataset is a complex and expensive issue, and its qualitative success is crucial to the success of the project. In this article (opens new window) you can find 7 additional ideas to lower the cost of your labeling efforts. - How reliable are the labels? To obtain high-performance Machine Learning systems it is clear that large amounts of data are needed. However, it is equally important that the data is correct, especially in supervised learning applications. Even if you have a lot of examples in your dataset, you could do very little to get acceptable performance from the ML models trained on them, if the data quality is low. 
The intrinsic noise of the dataset is impossible to eliminate and will affect both the training phase of the model and in the phase of predictions of new examples as well. Entrusting labeling to working groups distributed through the services of large tech vendors requires that the labeling task must be simple. It is rather simple to distinguish between categories of clearly visible animals or the colors of a dress or simple sounds (if clear labeling instructions are provided), it is very difficult to distinguish for example various different species of birds, or the words of a specific language (for someone who doesn't know the language). Some labeling tasks are simply not obtainable through "crowd-labeling", such as diagnosing medical images or classifying complex documents, which require a deep knowledge of the domain, and a lot of practical experience. - Is it possible to put the data altogether? Often the data sources from which the data comes are heterogeneous and fragmented: a company often divides its data into "silos", making a silo for each business process. Or in general, we may have to merge several datasets and figure out how to do it. Does this lead to the question: do the data have the same format? Do they refer to the same period or to different times? Have they been collected in the same way? - Is this sensitive data? Often the public datasets that are used for ML applications do not contain sensitive data (such as identity, medical records, crimes), but sometimes when we work with real-world data they carry with them sensitive information, and we have to worry about managing it. Unless you need to develop an ML application to customize your user experience (and therefore personal data is critical), you can generally delete it without too much trouble. For example, you can "make anonymous" a dataset of medical records (or X-ray images) by deleting the names and details of patients. If I want to train a bone fracture classifier, I don't need to know who the skeleton in question belongs to! (Or rather, it could be useful and additional information, but it is not essential). - Can we achieve our goal with this data after cleaning and processing it? This question is very general and difficult to answer, especially for a novice! But we must try to imagine: after having cleaned and prepared the data, will their final form be "learned" by an algorithm? To learn how to answer this question, the only solution is to gain experience. So it means that it will happen many times that maybe we work on the data and clean it, and then we realize that there are no satisfactory methods to learn from them and get acceptable results. But don't despair! Every time you make this kind of "mistake" (inability to assess a-priori the feasibility of an ML application) you learn a lot, and soon you'll be able to assess in advance if your efforts will be well rewarded, before spending whole nights labeling examples! - How much data is there (number of examples, storage requirements)? The more data you have, the better. How many times have we already repeated it? Knowing how much data you have available is crucial, and understanding how much data you need (about) to make an ML algorithm learn satisfactorily is even more important. Often the examples have to be in the tens of thousands range to get enough satisfying results, but the state of the art can usually only be achieved with hundreds of thousands of examples. 
This number, however, is very empirical: it strongly depends on the complexity of the task, and it can be reduced through the use of Transfer Learning (which takes pre-trained models and "adjusts" them slightly). - Can we augment the dataset? There are many ways to get more data than you have. In the next technical guides we will see both data collection (scraping, search engines) and "data augmentation" techniques that increase the size of the dataset. For example, suppose we have a dataset of 10,000 images of fruit. We could add the same images slightly transformed, with small rotations, crops, or shifts in contrast and brightness (a minimal code sketch of this idea appears at the end of this guide). Adding completely new data generally helps the algorithm generalize better to data it has never seen, while data augmentation generally increases the robustness of the system (it tends to make fewer mistakes due to distortions or low quality in the example it is examining). If we answer these questions, we should have a clear picture of what data we have available. We can now proceed to the definition of the problem.
# Set objectives and scope
Like any other software project, a data science project needs a clear goal to reach. Formulating this objective is crucial for various reasons: - the success of the project - the measurability of the success of the project - setting a clear target avoids creating expectations that are too high I suggest that you use the SMART methodology for projects, which consists of formulating them in a way that is Specific, Measurable, Achievable, Relevant, and Time-bound. In particular, defining the scope of the project is fundamental. The scope is the size of the project, the number of components of which it is composed. Projects often fail because there is no clear scope and components keep being added until the initial goal is lost from sight. - "I want to build a user interface that understands natural language and interacts with the user" is too generic. - "I want to build a user interface that can take orders for the pizzeria and organize them according to arrival time" is a much more defined scope. - "This system must help us classify all the new data that arrives" is too generic. - "This system must classify the images that users send us according to the color of the object photographed" is specific. Trying to formulate the problem in a "SMART" way helps you not to lose sight of the objective, not to set yourself unattainable objectives, to reach them on time and, above all, to realize whether you are working well or badly (measurability). We will see in the next guide "Choose the metrics" how fundamental this aspect is. Reading the original SMART paper (opens new window) is highly recommended. This mini-course from Google (opens new window) will help you frame your problem. Do it. This (opens new window) is a very good example of framing an NLP (Natural Language Processing) project. Take a look at the entire Kaggle YouTube channel, there's a lot of learning material! In general, it is extremely useful to produce a document summarising all the answers to these questions, so as to provide a clear view of the project as a whole, what the final objectives are, and the most important characteristics of the system. This list is not exhaustive, and as a question comes to your mind, feel free to open a pull request against this file. A Virgilio member will take over your request and enrich this guide with your contribution.
In the next guide usage and integration we will see other key questions we have to think about, like the usage of the system, its integration, and more!
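To make the data-augmentation idea mentioned above concrete, here is a minimal sketch. It assumes Pillow is installed and that the original images sit in a local fruit/ folder; the folder names and the specific transforms are illustrative choices of mine, not something prescribed by this guide.
# Minimal data-augmentation sketch (assumes Pillow; folder names are hypothetical)
from pathlib import Path
from PIL import Image, ImageEnhance

SRC = Path("fruit")            # hypothetical folder with the original dataset
DST = Path("fruit_augmented")
DST.mkdir(exist_ok=True)

for img_path in SRC.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    variants = {
        "rot10": img.rotate(10, expand=True),                          # small rotation
        "crop": img.crop((10, 10, img.width - 10, img.height - 10)),   # small crop
        "bright": ImageEnhance.Brightness(img).enhance(1.3),           # brightness shift
        "contrast": ImageEnhance.Contrast(img).enhance(0.8),           # contrast shift
    }
    for name, variant in variants.items():
        variant.save(DST / f"{img_path.stem}_{name}.jpg")
Each original image yields four extra training examples; in practice you would pick transforms that preserve the label (a rotated apple is still an apple).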
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00228.warc.gz
CC-MAIN-2021-49
17,544
89
https://eosio.stackexchange.com/questions/1339/why-is-ram-usage-different-when-creating-same-accounts/1695
code
I am trying to understand why RAM usage is different for the same task. cleos system newaccount --stake-net "0.0010 EOS" --stake-cpu "0.0010 EOS" --buy-ram-kbytes 3 It succeeded, but the creator account's RAM usage went up by 0.3 KiB on the first account creation; the second time I tried to create another account with the same command and parameters, the creator's RAM usage went up by only 0.1 KiB. Why is it different? I thought I would be charged 3 KiB every time, since I specified 3 KiB? (I know the standard is 8.)
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304760.30/warc/CC-MAIN-20220125035839-20220125065839-00636.warc.gz
CC-MAIN-2022-05
506
4
https://anninhhiendai.com/archives/93
code
Posted August 18, 2018 09:19:03It’s been a while since I posted, but this time around I’m happy to announce that the latest video camera stock update will be out tomorrow, August 20. The update is packed with a number of new features and improvements, as well as a couple of bugs fixed. In addition to the new camera store and camera images, I’m also releasing a new video camera images plugin for the plugin manager. This plugin makes it easier for people to import images from the camera store or camera images from plugins. The new version of the plugin also includes a bunch of bug fixes and a couple other improvements, which should make it a great plugin for any plugin manager out there. More info: https://www.youtube.com/user/jerusalempost/posts/qk7p6c9l5b3d5d0xk7f7v5zg7b New video camera files: http:/ / youtube.com / search?q=video&s=video%3A%2F%2Fr%2Fi%2FI&ie=UTF8&client=safari&q=&f=%22video%2Flibrary%2Fa%2FToggle%2FSearch%3Fq%2D&sort=byDescending&t=0x100000&src=http%3E%2E%22youtube.gadgets.com%22&clientId=7f6f6e1f6cad0f2c9d5cdf4f1d5ffd4&v=1&hl=en&ct=clnk&client=”safari”&srcUrl=https://youtu.be/qn6jgfjYtWg&clientProfile=&vbPlayer=1 Video camera images: http: / youtube . com / search ?q=image&s=-video%26F%26fmt%3D4%26type%3Ds%26format%3DD&ie = UTF8&cad=0&rs=19 More info at: http : / youtube / video camera update
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00396.warc.gz
CC-MAIN-2022-21
1,343
7
https://knowledge.broadcom.com/external/article/88853/when-trying-to-install-applications-mana.html
code
After a new install on RHEL 4 the instance fails to start with an AwE-9999 error. The error message should be similar to:
AwE-9999 Internal error (8/23/12 1:19 PM) java.lang.UnsatisfiedLinkError: /opt/rona/appuc4/c/libnativecalls.so: /opt/rona/appuc4/c/libnativecalls.so: requires glibc 2.5 or later dynamic linker at java.lang.ClassLoader$NativeLibrary.load(Native Method)
Please invoke the sosite file on the instance. If you are using 64-bit Java, you will need to install 32-bit Java and point the instance to it, as follows. Please download 32-bit Java 1.6 from the Java download site and install it on the server. Then add the following to your sosite file, located in the master's site directory:
JAVA_HOME=<path to java directory>/jre;export JAVA_HOME
Modify <path to java directory> to the path of the new JRE. Please also go into the master's $AW_HOME/c directory and copy the 32-bit libraries over:
cp $AW_HOME/c/libnativecalls.so.32 $AW_HOME/c/libnativecalls.so
Then shut down the instance, re-invoke the sosite file, and restart the instance.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00352.warc.gz
CC-MAIN-2023-14
1,165
15
https://dollypythonvintage.com/page/2/
code
Every wicked witch needs one!! At our original Haskell location!! $50. Old Ron. On cufflinks and a tie bar in gold bling. In a fancy box. Oh, you shouldn't have!! $38. At Dolly On Bishop! What a gem of a 1960's bag! Dead stock from the infamous Century Plaza Hotel in California! $150. At our little sister shop, Dolly On Bishop!! A pretty great light-up beer sign for wherever you drink your beer! $125! At our original Mothership Haskell location! Very nice brass candlestick featuring the world. $58. At our original Mothership Haskell location! So modern! Very cool retro chair, $125. At our original Mothership location on Haskell!! Here's one of our favorites! Dawn is a doll in a very cute 1970's style suit! We get one-of-a-kind vintage fashions daily at both our locations! It's an original 1988 light-up Bud Lite Spuds McKenzie lamp! At our Oak Cliff/Bishop Arts District shop, Dolly On Bishop! $225. One great iconic LARGE framed photo of what made Ft. Worth, Texas famous. $88. At our original Haskell location!! Very mod and very vintage 'museum quality' fur coat! $485 at our original Haskell location!
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00269.warc.gz
CC-MAIN-2021-39
1,126
10
http://tarc.exeter.ac.uk/people/centremembers/arun_advani/
code
Arun Advani is a PhD candidate in economics at University College London, and PhD Scholar at the Institute for Fiscal Studies. His research focuses on the role of interactions, particularly networks, in public policy questions in both developing and developed country contexts. He has worked on the role of interactions in influencing the returns to migration, the benefits of microfinance programmes, and the effects of tax audits. Between 2011 and 2013 he also worked on environmental economics, and the design of public policy in meeting carbon reduction targets, at the Institute for Fiscal Studies. Over the same period he was an Associate Fellow of King's College, Cambridge, where he taught microeconomics. His earlier studies were in Economics, at the University of Cambridge and University College London.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315321.52/warc/CC-MAIN-20190820092326-20190820114326-00341.warc.gz
CC-MAIN-2019-35
813
3
https://www.pradipsikdar.com/clavier-studio/
code
Music Director | Sound Engineer | Composer | Keyboardist | Voiceover Artist Welcome to my "Clavier Studio". 🎚 🎛 🎙 🎧 🖥 In this channel, you will enjoy different kinds of performances, both casual and feature-rich, unplugged and plugged, folk and electronic, recitation, even some funny numbers. 🎹 Please subscribe to my Clavier Studio! You know well what the next line would be, right? 😊 Please subscribe to the YouTube channel and click the bell icon to get all the notifications. I promise, you won't regret it! Would you like to be featured over here? No problem! Please contact me for further details. In the meantime, enjoy the releases below and share them with your friends!
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00227.warc.gz
CC-MAIN-2023-40
705
6
https://www.advancio.com/blog/category/advancio-pays-it-forward/
code
We are a software development company that believes in the power of people. Together we deliver cutting-edge software solutions that increase profitability, maximize efficiency, and provide an enhanced user experience. Come experience the Advancio difference!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594018.55/warc/CC-MAIN-20180722213610-20180722233610-00062.warc.gz
CC-MAIN-2018-30
337
2
https://cholos.newgrounds.com/news
code
I know, I know. I've been dead and slacking, but I thought I'd do something good, maybe better. I'd make 3D realistic Madness Combat characters and maybe make the whole Madness thingy more alive. Although I might start working on it in the future... when I find a better laptop that doesn't crash all the time. But for now that's all I have so...
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00507.warc.gz
CC-MAIN-2018-39
348
4
https://forums.getdrafts.com/t/manage-multiple-todo-lists/8557
code
I have written an action which allows to manage multiple ToDo Lists. Just create a new draft and write the todo items (one per line). Select the action and you will be prompted to select one of your existing todo lists (they need to be tagged with “todolist”). Your items will be appended to the selected list as todo checkmarks. Hope this will be useful.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00464.warc.gz
CC-MAIN-2020-50
359
3
http://bio.net/bionet/mm/dros/1999-February/004572.html
code
I'm doing a science project. I have some very interesting specimens that I would like to preserve, because at the current time they are rotting away and they are an integral part of my project. What do I do? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00614.warc.gz
CC-MAIN-2020-50
285
2
https://forum.opencv.org/t/there-seems-something-wrong-about-depth-calculation-in-cv-stereoreconstruct-function/13328
code
I am confused about the implementation of the absolute depth calculation in omnidir.cpp, which is different from the original paper (Binocular Spherical Stereo). Specifically, the depth is calculated as follows:
float depth = float(baseline * f /realDis.at<float>(j, i));
but in the original paper (Binocular Spherical Stereo), the absolute depth is calculated with a different formula (shown as a screenshot in the original post), where \rho_r is the depth of the left image.
By the way, another repository, pdi, which provides a solution to get the panorama depth image from a single fisheye stereo image pair, uses the equation from the original paper to calculate depth.
I am wondering if I am missing something? Or are these two implementations equivalent? Or is it just a bug? Looking forward to your reply!
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510481.79/warc/CC-MAIN-20230929022639-20230929052639-00715.warc.gz
CC-MAIN-2023-40
747
9
https://elixir.support/t/cannot-go-back-to-blog-summaries-alloy/3257
code
I have set up Alloy. Everything is working fine, but when I have read the full blog post, the link 'return to summaries' gives the following message: You don't have permission to access this resource. I have set up the blog inside a folder (…/Blogmap/Blog/blog.php). The link does not go to blog.php as expected, but to …/Blogmap/Blog/ - it does not seem to see the blog.php.
Your page's file name should be index.php, not blog.php.
Also, don't forget to do a quick search of the forum. It'll often save you having to wait on me or someone else to reply. This is a topic that's been covered a few times before. One example:
I tried to find a post on this matter, but I guess I chose the wrong words.
For that search I simply used Go Back as the search term, since it was in your thread title.
I am not that smart, and a Dutchman : )
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00280.warc.gz
CC-MAIN-2022-33
844
10
https://forums.comodo.com/t/sandbox-not-logged-on-as-admin/259069
code
Can you help me with the following problem? I run setup for a new program, and CIS pops up a warning recommending the sandbox. I click the sandbox button. I get a message saying I 'must be logged in as administrator to use the sandbox'. OK, I'm the only one who uses this computer and I always log in as admin. How can I get the sandbox to work?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818468.34/warc/CC-MAIN-20240423064231-20240423094231-00827.warc.gz
CC-MAIN-2024-18
342
8
https://github.com/achindra/Anuvadak
code
Translates the world around you into Hindi
This app lets you take a picture or load one from your gallery and translate the text in it to Hindi. To make this application work, you need to create the following keys and update them in App.xaml:
- Azure Computer Vision API Key
- Azure Translator Text API Key
- Cloud Translation API (Optional)
This app has a toolbar (kind of) at the bottom that is scrollable.
- First button is to load an image from the gallery
- Second button is to take a picture from the camera and load it
- Third is a slider that fills the background with grey when translated text is superimposed on the image
- Fourth is a reload button; it uses the image that is already loaded
- Fifth is a color picker for the translated text
- Sixth is a font resizer
- Seventh lets you switch to the Google translator
- Eighth lets you load just the translated text. It also displays any exception. (If you don't get a translated result, switch to this view.)
The app is available for download at https://play.google.com/store/apps/details?id=com.Gigabits.Anuvadak
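For context on what the Translator key is used for, here is a rough sketch of the kind of call the app makes against the Azure Translator Text API (v3 REST endpoint). This is not code from this repository; the endpoint, headers, and payload are written from memory of the Azure documentation, so verify them against the current docs before relying on it.
# Hedged sketch of an Azure Translator Text v3 call (key/region are placeholders)
import requests

KEY = "<your-translator-key>"        # the key you would put in App.xaml
REGION = "<your-resource-region>"    # e.g. "westeurope"
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate_to_hindi(text: str) -> str:
    resp = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": "hi"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

print(translate_to_hindi("Translate the world around you"))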
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00089.warc.gz
CC-MAIN-2021-21
1,034
16
https://experienceleaguecommunities.adobe.com/t5/adobe-analytics-questions/segments-do-you-see-a-more-elegant-way-to-do-this/qaq-p/251797
code
The purpose of this segment is to exclude 'suspicious activity'. We've noticed that on a voting application a few particular GEO regions are generating an astounding number of page views and repeat visits. We cannot exclude these GEO regions altogether, but have picked some generous numbers to define 'suspicious activity':
Exclude visitors who have generated over 500 visits within one week.
Exclude visitors who have generated over 200 page views within one visit.
I'm finding that this segment is adding some overhead to reporting because it has a lot to filter out, and it has WITHIN operators.
You could remove the second "within" clause and just say "Visit container where Page Views are greater than or equal to 200" instead. The visit container implies that the page views happened within the same visit. That would cut things down a little bit. The sequential segment wouldn't make sense there, since it is saying the user must first have 200 page views and then have 500 visits. You'd probably want the 'then' to be an 'or', then use a nested container to specifically call out 500 visits. Otherwise the suggested segment looks great.
If the desire is to weed out fake voters, I would do this. Set up an event that is fired upon vote completion (an event triggered by, let's say, the Vote button or the "complete your ballot" button). Now each vote will be tracked by the instance of the vote occurring. Next, apply a correlation report for this against IP address. Now you can see the specific IP addresses that completed a vote. If it's a shared IP address, break it down further by screen size or browser to verify different users. Then you could even set up an alert for when an IP address records more than 1 vote per hour or day, etc.
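To make that last suggestion concrete outside of Analytics itself: if you can export hit-level data (for example via Data Feeds) with a visitor id, visit id, page URL, and timestamp, a quick offline check of the same thresholds might look like the sketch below. The column names are illustrative assumptions, not the actual feed schema.
# Hedged sketch: flag visitors that exceed the thresholds discussed above
import pandas as pd

hits = pd.read_csv("hits.csv", parse_dates=["timestamp"])  # hypothetical export

# Visits per visitor per calendar week
visits_per_week = (
    hits.groupby(["visitor_id", pd.Grouper(key="timestamp", freq="W")])["visit_id"]
        .nunique()
)
too_many_visits = visits_per_week[visits_per_week > 500]

# Page views per visit
pv_per_visit = hits.groupby(["visitor_id", "visit_id"])["page_url"].count()
too_many_pageviews = pv_per_visit[pv_per_visit >= 200]

print(too_many_visits.head())
print(too_many_pageviews.head())
A list produced this way can then be turned into a segment or an exclusion rule, which keeps the heavy filtering out of the reporting UI.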
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00559.warc.gz
CC-MAIN-2021-17
1,721
11
https://discourse.mozilla.org/t/use-wikipedia-to-build-language-model-issue-with-size-of-txt-file/34656
code
I am using DeepSpeech for Indian English recognition and I'm currently using the 0.3.0 model (official release). With the existing LM, the words are not being generated properly, although the dialect is catching up. I am training from the checkpoint with India-specific voice data. For the LM, I have downloaded the Wikipedia dump (English). After treating the data (removing spaces, punctuation, etc.) the size has come to 14.4 GB of text in a "txt" file. Now, I have tried generating an LM (using KenLM) on a few 110-150 MB datasets and the resulting ARPA file is about 1.2-1.6 GB (and the binary files are about 50% of that size). Now, is it practical/possible to use a 14 GB text file to generate the necessary LM? If that's the case, do we need 100 GB of RAM? (ARPA/binary files must reside in RAM while being used by DeepSpeech, right?) Kindly suggest a possible remedy and how to proceed. P.S.: I'll have to add about 1 GB of text in the future for some field-specific texts.
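For what it's worth, pruning and quantizing the KenLM model is the usual way to keep the ARPA/binary from a corpus of this size manageable. Below is a sketch that drives lmplz and build_binary from Python; it assumes the KenLM binaries are on PATH, and the exact flags should be double-checked against your KenLM build before relying on them.
# Sketch of building a pruned, quantized KenLM model from a large corpus
import subprocess

CORPUS = "wiki_clean.txt"   # your ~14 GB cleaned dump (hypothetical filename)
ARPA = "lm.arpa"
BINARY = "lm.binary"

# --prune 0 0 1 drops singleton 3-grams and higher, -S caps RAM, -T sets the temp dir
with open(CORPUS) as fin, open(ARPA, "w") as fout:
    subprocess.run(
        ["lmplz", "-o", "5", "--prune", "0", "0", "1", "-S", "50%", "-T", "/tmp"],
        stdin=fin, stdout=fout, check=True,
    )

# trie structure plus 8-bit quantization keeps the binary (and its RAM footprint) much smaller
subprocess.run(
    ["build_binary", "-q", "8", "-b", "8", "trie", ARPA, BINARY],
    check=True,
)
Pruning rare n-grams and quantizing typically shrinks the binary by an order of magnitude compared with an unpruned ARPA, which is what makes a Wikipedia-scale LM loadable on ordinary hardware.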
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583730728.68/warc/CC-MAIN-20190120184253-20190120210253-00416.warc.gz
CC-MAIN-2019-04
947
5
https://developpaper.com/batch-bat-starts-the-program-on-a-weekly-basis/
code
@echo off
set no=%date:~13,14%
if %no%==1 goto :open
if %no%==2 goto :open
if %no%==3 goto :open
if %no%==4 goto :open
if %no%==5 goto :open
goto :eof
:open
start "VPC" "E:\Program Files\Microsoft Virtual PC\Virtual PC.exe"
start "MSN" "C:\Program Files\Windows Live\Messenger\msnmsgr.exe"
::start "QQ" "D:\Start\Tencent QQ"
::start "sms" "D:\Start\Fetion 2008.lnk"
start /min "OUTLOOK" "Outlook"
start /min "Jinshan Ciba" "D:\Start\Jinshan Ciba 2007"
goto :eof
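One caveat the article leaves implicit: %date:~13,14% slices the weekday out of whatever format the machine's locale uses for %date%, so those offsets will not transfer between locales. Purely as an illustrative, locale-independent alternative (not from the original article; the program paths are copied from the batch file above), a Python version could key off datetime.weekday() instead:

```python
# Hedged alternative sketch: start the weekday programs based on
# datetime.weekday() (Monday=0 ... Sunday=6), independent of the
# locale's %date% format.
import datetime
import subprocess

PROGRAMS = [
    r"E:\Program Files\Microsoft Virtual PC\Virtual PC.exe",
    r"C:\Program Files\Windows Live\Messenger\msnmsgr.exe",
]

if datetime.date.today().weekday() < 5:  # Monday through Friday
    for exe in PROGRAMS:
        subprocess.Popen([exe])  # fire and forget, like `start`
```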
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00508.warc.gz
CC-MAIN-2019-35
1,091
11
http://www.dslreports.com/forum/r27960382-MN-Slow-Downstream-Upstream-Ok-
code
Been getting some weird issues here... It all started out with bad signal levels. Yesterday they replaced my drop, got the signal levels where they should be, and the connection hasn't dropped since.
Motorola SB6180, DOCSIS 3.0
Downstream: Frequency 111000000 Hz, Signal To Noise Ratio 38.9 dB, Power Level 2.9 dBmV, Channel ID 2
Upstream: Frequency 32000000 Hz, Power 40.7 dBmV
I have two of this model modem; I have swapped them, no change. Other customers in this community, eight of whom I have seen in person, are also getting slow speeds. The DOCSIS 2.0 modem owners are getting around 0.3-0.5 Mbps downstream; the DOCSIS 3.0 users are getting around 2-3 Mbps thanks to channel bonding. I even went so far as to go to a friend's house and swap their RCA DCM425 with my spare SB6180; their speed jumped from 0.3-0.5 Mbps up to around 2 Mbps. I then swapped back to their modem and their speed dropped back down to 0.3-0.5 Mbps, so it's easily replicated. Here are some of my latest speed tests. The upstream seems to be OK; it's supposed to be 2.5 Mbps up, and I generally meet that number or better, no problem. The downstream, on the other hand, sucks. I am paying for 30 Mbps and I'm getting a tenth of what I should have. If I was at least averaging 20 Mbps I would be satisfied, but at these speeds the local DSL provider is faster, and that's saying something!!!
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131309986.49/warc/CC-MAIN-20150323172149-00150-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
1,330
13
https://ubuntu.social/interact/102300116272006484?type=favourite
code
#Kubernetes 1.15 is here! Read about the notable upstream release here plus Charmed Kubernetes and microK8s updates https://blog.ubuntu.com/2019/06/19/kubernetes-1-15-now-available-from-canonical
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00118.warc.gz
CC-MAIN-2019-47
385
3
https://whatisseminar.xyz/talks/20110128.html
code
Social media are omnipresent today! But what is the role of social media in mathematics? This question is still wide open and in this talk I want to give a glimpse of the developments in this area. The focus will lie on mathematical blogs, but mathoverflow, the polymath project and the use of blogs in other sciences will also be addressed. Moreover, I will use the opportunity to present Mathblogging.org, an aggregator for mathematical blogs that can serve both as an index as well as a starting point for exploring the mathematical blogosphere.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511170.92/warc/CC-MAIN-20231003160453-20231003190453-00055.warc.gz
CC-MAIN-2023-40
548
1
http://games.seiha.org/viewtopic.php?f=14&t=621&sid=83b25f24e0ad4c08876da7455bf9880f
code
Tools to rip CG/sprites and voices? I am a 3D modeler (student). I would like to know which tools are used to rip sprites and voices. I've finished the game and it would be awesome to make Taiga Touma in 3D, and perhaps other characters, and have them fight. Also, for those that have finished the game and played Destiny's harem/Crea route (I haven't yet, don't spoil me). I would also like to rip Taiga's battle sounds, even if it means searching thousands of files. Anyway, thanks for reading and forgive my Engrish.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00575-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
516
5
https://usethis.r-lib.org/reference/use_github_release.html
code
Draft a GitHub release
Creates a draft GitHub release for the current package. Once you are satisfied that it is correct, you will need to publish the release from GitHub. The key pieces of info are which commit / SHA to tag, the associated package version, and the relevant NEWS entries.
If you use devtools::submit_cran() to submit to CRAN, information about the submitted state is captured in a CRAN-SUBMISSION or CRAN-RELEASE file. use_github_release() uses this info to populate the draft GitHub release and, after success, deletes the CRAN-SUBMISSION or CRAN-RELEASE file. In the absence of such a file, we must fall back to assuming the current state (SHA of HEAD, package version, NEWS) is the submitted state.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00682.warc.gz
CC-MAIN-2023-06
706
11
https://forge.codesys.com/forge/talk/Visualization/thread/4f45d39f3d/
code
I have a question that I haven't found an answer for. When I first started using Codesys it was Codesys 2.3 and my supervisor told me that if a visualization was going to make use of a variable that it had to be a Global Variable. I don't even know if that was good advice then. Of course now I'm using Codesys 3.5. Does it have any effect to use local variables from a POU vs Global Variables vs using Visualization variables? On the main page would it matter if variables from a bunch of different POU's are used, some from different tasks? I've used variables from different POU's for the same visualizations and haven't had any problems so far. Log in to post a comment.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193391.9/warc/CC-MAIN-20200920031425-20200920061425-00285.warc.gz
CC-MAIN-2020-40
674
6
https://www.fanfiction.net/s/6068038/27/Yuffentine-ABC
code
! is for Closing and Afterword Well, here it is, folks. This has been my first drabble collection (with A is for Accusation being my first ever drabble). It's been a pleasure for me to write, but also a learning experience. After a reviewer showed regret that this would be ending, I decided that I may well do another collection of a similar nature. I'm still working out details, but all I know is that I'm going to have each chapter in it be between 50 (dribble length) and 500 words. If anyone has any prompts for it, send them my way! On that note, thank you to all the people who took the time to review (or even just to read) these little things. With regards to wordcount, all these are 100 words; the word-counter on is a little bit dodgy. (For example, it adds quite a few words when you put in a horizontal ruler.) Excluding title (invariably four words) and author's notes (of which I used as few as possible), they are all 100 words. (That is, of course, with hyphenation and everything to make it more confusing.) My greatest thanks to anyone reading this, and I hope you've enjoyed the story. A/N: On a side note, skipping the title and this AN, this chapter is exactly 200 words- known as a drouble. And, yes. I'm very much aware that no one cares.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567042.50/warc/CC-MAIN-20171215060102-20171215080102-00198.warc.gz
CC-MAIN-2017-51
1,264
6
https://community.spiceworks.com/topic/185995-adding-a-return-key-to-install-script
code
I'm writing a quick bash script to install Postfix onto an Ubuntu server and was wondering if I could add an Enter key press to automate the install. Right now I start the script and it runs until it asks for input (see attached screenshot). I want it to just hit Enter and keep rolling. Any easy way to do this? Thanks!

I don't suppose it's possible the way you desire, since the script passes focus to the Postfix installer and thus cannot send any keys to something it has no focus on. But here is a howto about preparing unattended installs for Debian and Ubuntu; perhaps it will be helpful.

I think that is what I'm looking for. Now I just need to figure out how to make that work. I finally found this link; I think for my specific situation, it helped me out more. Just sharing in case anyone else is looking.
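The linked howto itself did not survive the scrape, but the usual non-interactive route on Debian/Ubuntu is to preseed Postfix's debconf questions and install with the noninteractive frontend. A hedged sketch follows; the debconf keys are the ones the postfix package is commonly documented to ask (worth confirming with debconf-show postfix on a configured machine), and the mailname is a placeholder.

```python
# Hedged sketch: preseed Postfix's debconf answers, then install it without
# prompts. Must run as root; keys and mailname below are assumptions/examples.
import os
import subprocess

SELECTIONS = """\
postfix postfix/main_mailer_type select Internet Site
postfix postfix/mailname string mail.example.com
"""

# Feed the answers to debconf ahead of time...
subprocess.run(["debconf-set-selections"], input=SELECTIONS, text=True, check=True)

# ...then install with the noninteractive frontend so apt never stops for input.
env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
subprocess.run(["apt-get", "install", "-y", "postfix"], env=env, check=True)
```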
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719273.37/warc/CC-MAIN-20161020183839-00105-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
804
6
https://oobrien.com/category/openstreetmap/page/5/
code
Peter Reed has been doing some excellent work taking the Department of Transport's measured road lengths by English county and comparing them with the total lengths of roads in OpenStreetMap. This has only recently become possible to do with spatial data purely from OpenStreetMap, because the English counties have now been completely added to the project. This last step was harder than it might seem because there is no freely available definitive source for boundaries in the UK (which is just plain odd). Instead, it was necessary to use a combination of local knowledge, examining signs and council objects on the ground, and tracing from out-of-copyright maps to form the boundaries. Peter's choropleth map is excellent and deserves a wider audience; here is a smaller version of it (click through to see the large version, which may also have been updated since). Choropleth of OSM road coverage vs Department of Transport figures, by Peter Reed. It is encouraging to see many areas at nearly (or over) 100% coverage. There are a number of reasons why coverage might exceed the Department's own figures, including more up-to-date information, the counting of slip roads, and the mis-tagging of private roads as public on OSM, so the map's figures should be taken with a pinch of salt. The Welsh and Scottish county boundaries are not yet complete in OpenStreetMap, so the coverage analysis cannot yet be extended there. Muki has also (with a student) done OSM road coverage analysis, using equal-area blocks rather than county-based units.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363332.1/warc/CC-MAIN-20211207014802-20211207044802-00403.warc.gz
CC-MAIN-2021-49
1,538
7
https://codedump.io/share/FnqXegPkAOsn/1/php-mailing-only-work-with-my-servermail
code
I've already checked links on Stack Overflow and I did not find an answer. My issue is: I have a website with a contact page. It takes the following fields: When I click on submit, I get an email only if the email address is one from my personal server. I use Apache 2 and PHP 7.0. I don't know how to fix this issue and always send the contact mail with my private address. Sorry for my English and thanks for answering!
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506640.22/warc/CC-MAIN-20210116104719-20210116134719-00418.warc.gz
CC-MAIN-2021-04
402
8
https://community.spiceworks.com/products/44334-optiplex-3010/reviews/review/599979?page=1&rating=2
code
I replace most of the workstations in the company with 3010's last year, and I'm actually quite happy with them. I'm not happy with Dell, however. While most companies categorize us as a "medium-sized business", Dell determined that we were "small", and since that determination, we have constantly been thrown new sales reps, seemingly every time I try to contact ours, I find out we have another new one, and it can take extra time just to get in touch. What does this have to do with the 3010's? Everything. I wanted to order two more last month. I couldn't accessorize it the way we always have in the past on the web site, so I got our rep on the phone. It turns out that we cannot customize them now, we have to take them "off the shelf". I can't upgrade the mem to 8GB (from 4), and I can't add the Radeon HD-6350 anymore for dual-monitors. All I'm being told is: "You can't do that", and "We don't do that anymore". Sorry Dell, I can't do it anymore either then.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629632.54/warc/CC-MAIN-20210617072023-20210617102023-00003.warc.gz
CC-MAIN-2021-25
970
4
https://gis.stackexchange.com/questions/91784/creating-many-uneven-buffers
code
It looks like your urban areas layer is a Boolean raster. If so, I would vectorize the raster as the simplest means of getting the urban area 'buffer' to match the 'jaggy sprawl'. You can then do a spatial join of the urban points data onto your new polygons to provide the polygons with attributes. If the raster is not Boolean and is in fact a grey scale, then you have a number of options. You could just 'Booleanize' it, but you may want to take the opportunity to handle how parts of the urban sprawl merge into each other. To do this, you could perhaps set a threshold (your experience with the data is the best guide - I can't suggest a value from the picture) above which you treat the areas as urban and below which you ignore them. This would have your polygons match the points more closely. Alternatively, you could categorize the raster by grey-scale value, and you may be able to investigate the possibility of differentiating peri-urban/village vs urban vs downtown districts based on the grey-scale value. If the grey scale is based on night-time illumination, this might not be such a bad option.
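The answer does not name a toolchain, so purely as one possible illustration (rasterio and shapely are assumed here, and the threshold and file name are placeholders to be tuned against the data), thresholding a grey-scale raster and vectorizing the resulting Boolean mask could look like this:

```python
# Hypothetical sketch: threshold a grey-scale raster into a Boolean "urban"
# mask and vectorize the mask into polygons. rasterio/shapely are assumed
# tools; THRESHOLD is a placeholder the analyst would tune to their data.
import rasterio
from rasterio import features
from shapely.geometry import shape

THRESHOLD = 30  # grey-scale value above which a cell counts as "urban"

with rasterio.open("night_lights.tif") as src:
    band = src.read(1)
    mask = band > THRESHOLD              # Booleanize the raster
    polygons = [
        shape(geom)
        for geom, value in features.shapes(
            mask.astype("uint8"), mask=mask, transform=src.transform
        )
        if value == 1                     # keep only the "urban" shapes
    ]

print(f"{len(polygons)} urban polygons extracted")
```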
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666467.20/warc/CC-MAIN-20191016063833-20191016091333-00263.warc.gz
CC-MAIN-2019-43
1,106
2
https://www.csee.umbc.edu/category/graduate/page/3/
code
Workshop on Solvers for Large, Sparse Linear Systems
Monday and Tuesday, 17-18 July 2017, Engineering Room 022, UMBC
UMBC will host a free, two-day workshop for faculty and students on solvers for large, sparse linear systems on Monday and Tuesday, July 17-18 in Engineering 022 at UMBC. Thanks to UMBC Prof. Matthias Gobbert for organizing and to University of Kassel Prof. Andreas Meister for presenting. If you plan on attending, please RSVP online. The simulation of real-life applications is of crucial importance in a wide variety of scientific as well as industrial areas, and the performance of the whole numerical method often depends decisively on the properties of the incorporated solver for linear systems of equations. The course provides a comprehensive introduction to both classical and modern iterative solvers for a stable, efficient and reliable solution of linear systems and is designed for students from many disciplines, including Mathematics, Engineering, Physics, Computer Science, Computer Engineering and Electrical Engineering. The course content covers:
- Introduction to basics from numerical linear algebra
- Splitting methods
- Multi-grid schemes
- Krylov subspace methods like CG, GMRES, BiCG, CGS, BiCGSTAB
The lectures will be accompanied by practical exercises in MATLAB. The workshop will be presented by Prof. Dr. Andreas Meister from the Institute for Mathematics, University of Kassel, Germany. He is an internationally renowned researcher in Numerical Analysis with a specialization that includes iterative solvers for linear systems of equations. These methods form the basis of the numerical kernels in modern software such as COMSOL, MATLAB, PETSc, and many others. Prof. Dr. Meister taught classes at UMBC during Fall 2013, when he spent a sabbatical at UMBC as part of the partnership between UMBC and the University of Kassel in Germany. This workshop is hosted by the UMBC High Performance Computing Facility. Light refreshments are graciously sponsored by the UMBC Division of Information Technology.
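The hands-on exercises are in MATLAB, but just to illustrate the kind of Krylov subspace method on the syllabus, here is a hedged SciPy sketch (not workshop material) that solves a sparse symmetric positive-definite system with conjugate gradients:

```python
# Illustrative sketch: solve a sparse SPD system with the conjugate gradient
# method from SciPy, the same family of Krylov methods (CG, GMRES, BiCGSTAB,
# ...) covered in the course.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# 1-D Poisson matrix: tridiagonal, symmetric positive definite.
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = spla.cg(A, b, maxiter=5000)  # info == 0 means the solver converged
residual = np.linalg.norm(A @ x - b)
print("converged" if info == 0 else f"cg returned info={info}", "residual:", residual)
```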
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529080.43/warc/CC-MAIN-20210122020254-20210122050254-00530.warc.gz
CC-MAIN-2021-04
2,119
16
https://link.springer.com/article/10.1007%2Fs00224-013-9504-x
code
A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization
Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization, i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426693.21/warc/CC-MAIN-20170727002123-20170727022123-00279.warc.gz
CC-MAIN-2017-30
1,435
4
https://www.moddb.com/downloads/ceds-assets-pack
code
Ever wanted new weapon animations, new gameplay features, and heavily improved game experience for Doom 3 among other things? You may want to check this out. Take a look at what is inside! -> So you may be wondering why this is linked to my mod S.T.A.R 1088? This pack has started as a collection of various features available in this mod. The result: Many of the internal functionalities are now free for all of you to use! Keep reading for more information. 1) This pack is meant exclusively for single player, it was not meant for Multiplayer use. 2) This is meant to be for PUBLIC USE. You have permission to do whatever you want with this pack. 3) It uses no custom code, so you can implement it to other mods of yours which are using a custom dll. 4) Note this doesn't include any kind of graphical enhancements. While building my WIP mod for Doom 3, I thought about uploading many of the internal functionalities of the mod for people who wanted to include them into their own projects. Finally I put it all together to make it functional with vanilla Doom 3 too. Patches are cumulative, but they do not exceed the 2 MB, for which reason they are easy and fast to download. It is highly unlikely this pack is going to be updated again. FOR THE GAME: It's good for giving Doom 3 another go. It uses no custom code so it won't bug anything if you place this directly into the "base" folder. It can be run as a mod (with its own folder) or be run directly with Doom 3 by placing the PK4 files in the "base" folder. FOR MOD DEVELOPERS: You can use the entire product or part of it for any mod of your own. You can include this as a whole (or part) in your own campaigns. You can do any modifications to this pack as you want. I'm sharing the pack "as is". Changing anything in this pack is up to you. You can mix other mods with it. If you're not experienced in the Doom 3 modding world, always make backups. Contents: (Significative additions, amongst others) -Balanced weapons, where everything from caliber to weight is taken into account! -Balanced inventory, where you can no longer hold 50 grenades and hundreds of ammunition (This doesn't affect carrying all your weapons) -Done the scope functional in the rocket launcher and it has 2 magnifications. -Done heavy new scripting work, counting the removal of redundant lines to the addition of many more for new functionalities. -Created new-from-scratch animations for almost every weapon in the game! Adding exclusive animations such as running cycles, fire mode switches and more! -Added scavenge feature! Searching corpses gives you a wide variety of useful items, which range from a single shotgun shell to a full dose of health or item combos! -Added iron sights modes for most of the weapons. -Added different fire modes for some weapons. -Added a flashlight to the shotgun. Its light is very short-ranged, it uses no bateries and can be toggled ON and OFF at will. -Added a Guide Screen to the mainmenu. (Idea inspired by BrutalDoom, RGH and the first Doom games themselves) -Changed many damage properties, where enemies with body armor like the security guards take less damage in the chest, as an example. -Changed damage produced by headshots. They are lethal and play a vital role for ammo conservation. -Changed muzzleflashes, explosions and most of the weapon particles for better looking ones. -Disabled "auto-reload". Are you a badass marine or what? (Idea inspired by SWAT 4). -Disabled shotgun looping-reload cycle. Each shell at a time. (Idea inspired by SWAT 4). 
-Retouched mainmenu (Fixed the Options screen, so it fits with the new functionalities. Added a guide screen) -Retouched HUD (Added certain things like fire mode indicators) -Enhanced weapon accuracy. The faster you shoot the higher the spread penalty is. Aiming affects accuracy as well. -Enhanced player movements, where heavier weapons affect your agility. -Enhanced player footstep sounds, replaced with 16 new ones. Some more changes are present in this pack! Other game fixes: -Weapons are lowered when climbing. -Weapons can now be switched while reloading. -Fixed bug where gunshot sounds would stop when switching weapons or your gun being lowered externally (e.g. by a script call) -Fixed a problem where the weapon offsets would be screwed up if dying while aiming. (Though it was mod-specific, it was a huge bug nevertheless) -Fixed hierarchies in some of the weapon meshes. -Fixed some disappearing decals such as blood splatters. Now they'll stay. -Changed many weapon mesh positions/rotations and also made minor enhancements to some of them, so that they look better. -Changed the way ejected shells are handled (except for the shotgun brass) -Changed rocket ammo & rocket launcher world models so that they fit with the new rocket launcher. -Changed many weapon sounds for more aggressive ones. -Increased explosion damage and radius. A grenade can kill you as easily as it could kill any mid-rank enemy
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00078.warc.gz
CC-MAIN-2022-33
4,963
57
https://forums.adobe.com/thread/671657
code
I know this has been asked before and has been around for a while now, but I had to ask. I am using the ToggleButtonBar component along with a ViewStack. Currently I have about 6 buttons on my ToggleButtonBar which I have tried to style using Flash. I know I can style only the first, last, and middle buttons; however, I am looking for some sort of solution to be able to style each button in the bar individually. If there is no other way, are there any alternatives apart from using the ToggleButtonBar? Thanks in advance.

Develop your own, similarly functioning custom control that you can style.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00244.warc.gz
CC-MAIN-2018-30
599
5
http://www.sudeepmandal.com/projects/reflection-mod-photoblog-theme-wordpress/
code
Theme Updated - 2nd October, 2010 (Reflection-Mod 1.2.1) Reflection-Mod v 1.2.1 Photoblog Theme for WordPress : Demo Reflection-Mod Logo for Header (Photoshop PSD File) If you'd like to see a good example of Reflection-Mod integrated with Refractal to form a complete blog+photoblog, check out Markus Hellmold's new site featuring a tight integration between both themes: Xtropolis (WARNING: Some photographs have adult content, so it isn't a work safe site) Changes in Latest Reflection-Mod (1.2.1) (October 2nd, 2010) - PHP 5.3 compatible: Thanks to Qlawy for a modified version of reflection-mod that got rid of some unsupported code. This should fix issues people were having with reflection-mod spitting out some weird code. - Better SEO support with title and alt tags for improved SERPS - Thanks to Harry on the forums Changes in Reflection-Mod 1.2 (January 31, 2010) - Lightbox (SlimBox) effects can be enabled for Category/Tag archive galleries: New in version 1.2 is an option in the admin panel to enable a lightbox effect for quickly browsing through images in your category/tag archives. - Disable/Enable automatic display of Shot Info (Post Text) with Photoblog Image: Admin panel option to enable or disable the automatic display of your Shot Info/Post text along with the photoblog image. Gives you this option depending on whether the image is of utmost importance or whether you prefer to display both the text and image as a single combined entity. - Enable/Disable EXIF link in Photoblog - Latest Theme Version Box in Admin Panel: The theme admin panel now displays the latest available version of Reflection-Mod with a link to the download page. Now you will always know if you have the latest version of the theme without having to actively check my site. - Implemented a multi-level CSS drop down menu: capable of displaying multiple levels of pages in the navbar. - Improved compatibility with older versions of PHP: Introduced an option in the Admin panel to turn of AJAX browsing of images which should make the theme compatible with practically all webhosts and PHP versions. Also includes JSON encoding for better PHP compatibility - Ability to change image quality: Implemented an admin menu option enabling the user to specify the image quality of images. Higher quality images will take longer to download. Can be optimized for a given webhost's speed/performance. - Support for a lower navbar: Make a separate unordered list in header.php with an id of "navbar2" to create a second set of hard coded links in the lower row of the navbar. Can be useful for integrating the photoblog with a regular text-blog running the Refractal theme. - Support for Category and Tag listings. When browsing images in a category, Prev/Next image links allow you to browse single images within the same category. I think this is a key feature missing in almost all WordPress photoblog themes (Even Pixelpost). Note: You need fancy permalinks enabled in WordPress for this to work. If you have the default permalink structure, in-category browsing will not work. - Widget ready sidebar (can be disabled): While I prefer to have a completely uncluttered look, I have enabled it to show how it looks. You can use any widget you like. - Random images hyperlink now displays a slick slider that pops down with 7 thumbnails of random images from your archive. This allows the user to select an interesting image out of an assortment of random images. - AJAX effects are persistent throughout the browsing process. 
In the original theme, clicking on Archives/Comments, etc would break the AJAX browsing mode and would switch to regular page loading. - Robust Admin Menu with tons of options! Now you can control a host of options from the Admin menu such as Copyright Information, enabling/disabling tags, categories, sidebar, reflection-effect, header page listings, etc. This means you shouldn't have to meddle with the actual theme code to get the look you desire. - Downloadable Photoshop Template for creating customized header logo for the theme. - MOST IMPORTANTLY: A brand new, matching text-blog theme - Refractal - that can be used on your website for a separate WP installation for the regular Blog component of your site. The two themes, Reflection-Mod and Refractal look and feel the same so it allows you to integrate your regular blog and your photoblog under one common umbrella! Before you install this theme, please make sure that: - You have installed the Yet-Another-PhotoBlog plugin correctly and that it is working. Follow the instructions on the plugin webpage to install the plugin properly. - Once YAPB is installed, go to the YAPB options section in the Administration section and make sure to disable all the automatic image insertion options for the plugin. If you fail to do so, YAPB will conflict with Reflection-Mod and things will behave very strangely. - Like Reflection, this theme requires every post to contain a YAPB image. If you wish to have your own content (without an image) add it in pages and not posts. Once you have done the above: - Download Reflection-Mod 1.2 Photoblog theme for WordPress and unzip the contents of the file. - You should now have a directory titled 'reflectionmod'. Copy this directory and all its contents into the wp-content/themes/ folder of your WordPress install. - Once all the files have copied over, select Reflection-Mod from the Themes panel in the Administration section and activate it. Creating your own Reflection-Mod Logo for the Header This is one of the first things you probably want to change. (I presume that having your own photoblog under my name is not a very appealing option =) ). To make this process simpler for you, I've gone ahead and uploaded a Photoshop template for the logo. Download the Photoshop Template for the Reflection-Mod header and open it in Photoshop. The Layers in the file are labeled and you should be able to easily edit the text in the Title layer. Copy paste the same text into the reflection text layer to get your final logo. Of course, you can go crazy and make a much more creative logo if you feel like it :). For those who don't have access to photoshop, it won't be as easy, but creating a logo is still simple. Just open up the logo.png file in /reflectionmod/images/ in GIMP (Opensource software equivalent of Photoshop). Use the eyedropper tool to pick the color of the 1pixel gray line, and create a new 1px line of that color on a new layer and position it over the line in the original logo.png image. Then follow a tutorial such as this one to create the reflection effect on your text. Finally save your image as logo.png and overwrite the default file in the /reflectionmod/images/ folder. Creating Archive/Mosaic Page If you wish to have a link in your Navbar titled Archive or Mosaic, which contains an array of thumbnails of images from all your posts (grouped by year), then create a new page and make sure to select the Mosaic Page Template instead of the Default template. 
Unless you want to have some text preceeding the thumbnail archive display, you can leave the body of the page empty. If you have the "Disable AutoPage Listing in NavBar?" option unchecked, the Navbar will automatically update to show a link to the Archive/Mosaic page. If you do choose to disable the AutoPage listing option, then you will need to go into header.php and manually enter the URL to the Archive page as an "a href" link in the Navbar unordered list. Creating an About Page Create a new page titled "About", but this time use the "Default Template" for the page. Enter whatever text you want (feel free to embed images too) and then save the page. As mentioned above, it will automatically show up in the Navbar if Autopage listing isn't disabled. Other Theme Customizations Most of the other admin options are fairly self explanatory. The first few options deal with the Copyright information that is listed in the footer. The portrait image width and landscape image width refers to the maximum width the images should have when displayed. Keep in mind that the value in these boxes should not exceed 800 as it will break the theme. (The theme is coded to have a maximum image container size of 800 pixels). The default value is 800 for landscape images and 450 for portrait images. By default, the Sidebar is disabled. When you first enable it, it will say something like "No Widget". You will need to add elements to the sidebar using the Widgets option in the dashboard. I am currently using the Subscribe Sidebar and the Yet-Another-Photo-Blog sidebar widget plugin on my photoblog. The latter is setup to display 5 recent thumbnails (100px width). Permalink Structure for your PhotoBlog I would strongly recommend that you use the fancy permalink structures for your photoblog. This will also enable in-category browsing of posts for this theme. The URL for your posts will look cleaner and the names are far more intuitive than a URL that looks like www.yourdomain.com/?p=XX . My favourite permalink structure for photoblogs is: /%category%/%postname%/ . Check the image below to see what your permalink settings should look like. To customize the Exif data that is displayed in the overlay panel, select Exif filtering in the YAPB options and select the different field values that you would like to have displayed. Reflection 1.2 along with Refractal is a new update, and one that I wasn't planning on ever doing, but made possible thanks to feedback from users and encouragement from a good friend (we have come to become friends due to this theme) Markus. So if you like the theme, let me know! I don't expect everyone to donate but it really does help a lot to know that there are folks out there who find this theme useful and enjoying it :). Also if there are any features you'd love to see, let me know. I can't guarantee anything, but you never know! - Johannes also deserves a huge thanks for his amazing Yet-Another-Photo-Blog Plugin for WordPress. - A big thanks to www.w3schools.com and all the other great sites on the net with some amazing tutorials that made modding the theme possible for a PHP/JS/WP newbie such as myself.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00756.warc.gz
CC-MAIN-2024-18
10,180
49
https://devsquad.io/job-list/5fc3b71db82b105bd8e1de84
code
Senior Frontend Developer (React JS) Our client is an international company which aims to apply technologies and innovative ideas to make life better. With their founder, an Israeli with about 20 years in software development, they are making challenging projects focused in E-commerce and development tools for clients from Israel, USA, Malaysia and other countries. They’re looking for talented people who want to learn a lot and build amazing things. Their say is "Born to Code, No plan B" and the intention is to hire top 10% talented software engineers in Vietnam. - Company type: Product and partnering - Country: Israel - Company size: 15-30 members - Workplace: District 1, Ho Chi Minh city, VN - Working Time: Mon - Fri - Analyze, review, feedback for product requirements and designs. - Participate in designing, estimating and implementing solution architectures. - Participate in coding and reviewing code of other members. - Research new products and technologies. - Create fully functional Web Apps or Mobile Apps. - At least 4 years of experience with ReactJS framework. - Familiar with MySQL, MongoDB, Redis. - Familiar with AWS, GraphQL, serverless. - Good at English communication skills. Nice to have knowledge/experience: - Experience with one of technologies (React Native, NodeJS, Python) is an advantage. - Good at algorithm. - Laptop and other necessary working tools - Professional, friendly, English speaking, international environment - Exposure to new and emerging technologies - Social insurance and 12 annual leaves, 13th salary, performance review twice a year - Company trips and occasional parties.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00489.warc.gz
CC-MAIN-2021-10
1,633
25
http://www.vistax64.com/crashes-debugging/297234-bsod-ntfs-sys-problem.html
code
Okay, I am having problems with my computer. Sometimes it unexpectedly turns off and restarts, or sometimes it shuts down completely. Also, sometimes while I'm using the computer the monitor turns into square boxes, a lot of them. Or sometimes the computer turns off and shows a blue screen. When I load my Windows Vista normally, it usually turns off 3 times, and then the computer can be used all day, but only after those 3 times. I am worried because I could mess up my hardware and stuff. Today the computer turned off, indicating that I have an Ntfs.sys problem. I do not know what that is, but I need help please! Sevenforums.zip attached. The HOSTS option didn't work because it said this: Could not find file C:\hosts.txt. Also, the event log worked a bit but said: The process cannot access the file C:\EventSys.txt because it is being used by another process.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164038376/warc/CC-MAIN-20131204133358-00043-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
870
2
https://bonesofthelostgod.com/free-internet-download-manager-with-patch-full-version/989-hp-procurve-software-download.php
code
In the Remote File Name feature automatically downloads a specified begin using the SCP or SFTP commands to transfer. The no auto-tftp command does show flash command to see set-default flash primary. Return to the Main Menu. The actual text of the log for a ProCurve similar when the copy is complete. NOTE: Download you use the and may be downloaded without the switch, such as those. Open an SSH session as you normally would to establish scripts that make it easier to upgrade multiple switches simultaneously. You can keep entering the configured on the switch, use TFTP client on the administrator and the flash is updated. ProCurve Software Upgrade part 1: Does anybody have a link for a firmware download for a JA ProCurve Switch? HP's cluster of a website seems to have abandoned it. How to use the script: STEP 1) Download the script "Network_switch_config_auto_backup" from GitHub and extract it to any drive. STEP 2) Open the tftpd64 folder. Just attempted to download the latest firmware for V2, PL/PM and the zip file is corrupted. Who can I contact to rectify this? Original Message.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00135.warc.gz
CC-MAIN-2024-10
1,312
7
https://rebeccagulotta.com/health-dashboard.html
code
The Health Dashboard was my senior-year capstone project for my Engineering Psychology degree at Tufts University. This project was completed in collaboration with Cambridge Consultants, an international product development firm. The goal of this project was to investigate the needs of elderly people living in assisted living facilities, in addition to the needs of their caretakers. Using our findings, we produced recommendations for the development of health tracking systems for these user groups. Download a PDF detailing this project and our process work.
Andrea Dwyer, Leslie Johnston, Dale Chesney, Emily Maretsky
Cognitive Task Modeling
An integral part of our work was to identify the constraints and requirements that could impact the development of the health dashboard. These constraints included the needs of our intended users; policies, like HIPAA, that govern the distribution of health information; and the technical requirements for the development of any potential devices and for the integration of third-party sensor data. In addition to working with Cambridge Consultants, this project was also a collaboration with many staff members, health professionals, and residents of Brookhaven at Lexington, a retirement and senior care facility in Lexington, Massachusetts. As a part of our work, we conducted focus groups and interviews with members from each of these stakeholder groups. In doing so, we learned a great deal about their experiences and needs. Based on our findings from the requirements analysis, focus groups, and interviews, we developed four design concepts: the kiosk, the dock, the dial, and the folio. The kiosk was an example of a public network of shared computers and sensors. The dock was an example of a system composed of two components: an individual interface and a larger system that could be shared by a family. The dial was a handheld device that would allow users to view and update information on the go. Finally, the folio concept explored how other uses of this system could help integrate it into the practices of the residents, their families, and caregivers. A major outcome of this project was the continued partnership between Cambridge Consultants and the Engineering Psychology program at Tufts. The success of this project also helped develop a partnership with Brookhaven at Lexington. After participating in our project, the residents and staff decided to sponsor their own project the next year.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00783.warc.gz
CC-MAIN-2024-18
2,475
35