query_id (string, 32 chars) | query (string, 6 to 5.38k chars) | positive_passages (list, 1 to 17 items) | negative_passages (list, 9 to 100 items) | subset (7 classes) |
---|---|---|---|---|
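Each row below pairs a query with its relevant (positive) and non-relevant (negative) passages; every passage is a `{docid, text, title}` object, and `subset` names the source collection (here `fiqa`). The sketch below is one way to give such a row a typed shape in Python; the class names and the `parse_row` helper are illustrative rather than part of any published loader, and they assume rows arrive as plain dicts (for example from a JSON Lines export).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Passage:
    docid: str  # 32-character hex identifier
    text: str   # passage body
    title: str  # empty string for every record shown here


@dataclass
class RetrievalRow:
    query_id: str                     # 32-character hex identifier
    query: str                        # natural-language question
    positive_passages: List[Passage]  # passages judged relevant (1 to 17 per row)
    negative_passages: List[Passage]  # hard negatives (9 to 100 per row)
    subset: str                       # source subset, e.g. "fiqa"


def parse_row(raw: dict) -> RetrievalRow:
    """Turn one raw row (a plain dict, as loaded from JSON) into typed objects."""
    return RetrievalRow(
        query_id=raw["query_id"],
        query=raw["query"],
        positive_passages=[Passage(**p) for p in raw["positive_passages"]],
        negative_passages=[Passage(**p) for p in raw["negative_passages"]],
        subset=raw["subset"],
    )
```

With that in place, `parse_row(json.loads(line))` would yield one `RetrievalRow` per line of such an export.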
ae1e3055df480fec31e5f75613ea205c
|
When I sell an OTC stock, do I have to check the volume of my sale in order to avoid an NSCC illiquid charge?
|
[
{
"docid": "bc29f3df7b49d4faef1a5644c2244382",
"text": "It's not enough just to check if your order doesn't exceed 10% of the 20 day average volume. I'll quote from my last answer about NSCC illiquid charges: You may still be assessed a fee for trading OTC stocks even if your account doesn't meet the criteria because these restrictions are applied at the level of the clearing firm, not the individual client. This means that if other investors with your broker, or even at another broker that happens to use the same clearing firm, purchase more than 5 million shares in an individual OTC stock at the same time, all of your accounts may face fees, even though individually, you don't exceed the limits. The NSCC issues a charge to the clearing firm if in aggregate, their orders exceed the limits, and the clearing firm usually passes these charges on to the broker(s) that placed the orders. Your broker may or may not pass the charges through to you; they may simply charge you significantly higher commissions for trading OTC securities and use those to cover the charges. Since checking how the volume of your orders compares to the average past volume, ask your broker about their policies on trading OTC stocks. They may tell you that you won't face illiquid charges because the higher cost of commissions covers these, or they may give you specifics on how to verify that your orders won't incur such charges. Only your broker can answer this with certainty.",
"title": ""
}
] |
[
{
"docid": "5cd255593318509c2eba3620efead98a",
"text": "You wouldn't fill out a 1099, your employer would or possibly whoever manages the stock account. The 1099-B imported from E-Trade says I had a transaction with sell price ~$4,500. Yes. You sold ~$4500 of stock to pay income taxes. Both the cost basis and the sale price would probably be ~$4500, so no capital gain. This is because you received and sold the stock at the same time. If they waited a little, you could have had a small gain or loss. The remainder of the stock has a cost basis of ~$5500. There are at least two transactions here. In the future you may sell the remaining stock. It has a cost basis of ~$5500. Sale price of course unknown until then. You may break that into different pieces. So you might sell $500 of cost basis for $1000 with a ~$500 capital gain. Then later sell the remainder for $15,000 for a capital gain of ~$10,000.",
"title": ""
},
{
"docid": "ca3f7736aee95448f9e98da943e33d6b",
"text": "Yes there will be enough liquidity to sell your position barring some sort of Flash Crash anomaly. Volume generally rises on the day of expiration to increase this liquidity. Don't forget that there are many investment strategies--buying to cover a short position is closing out a trade similar to your case.",
"title": ""
},
{
"docid": "f4ca061d1169a2f105fa24f5d250c2d5",
"text": "Any time a large order it placed for Buy, the sell side starts increasing as the demand of Buy has gone up. [Vice Versa is also true]. Once this orders gets fulfilled, the demand drops and hence the Sell price should also lower. Depending on how much was the demand / supply without your order, the price fluctuation would vary. For examply if before your order, for this particular share the normal volume is around 100's of shares then you order would spike things up quite a bit. However if for other share the normal volume is around 100000's then your order would not have much impact.",
"title": ""
},
{
"docid": "2348440127403f34ce321c38c6318907",
"text": "What is essential is that company you are selling is transparent enough. Because it will provide additional liquidity to market. When I decide to sell, I drop all volume once at a time. Liquidation price will be somewhat worse then usual. But being out of position will save you nerves for future thinking where to step in again. Cold head is best you can afford in such scenario. In very large crashes, there could be large liquidity holes. But if you are on upper side of sigmoid, you will be profiting from selling before that holes appear. Problem is, nobody could predict if market is on upper-fall, mid-fall or down-fall at any time.",
"title": ""
},
{
"docid": "7618f539a831ff550e87f916c911b7e6",
"text": "\"My answer isn't a full one, but that's because I think the answer depends on, at minimum, the country your broker is in, the type of order you place (limit, market, algo, etc.,) and the size of your order. For example, I can tell from watching live rates on regular lot limit orders I place with my UK-based broker that they hold limit orders internally until they see a crossing rate on the exchange my requested stock is trading on, then they submit a limit order to that exchange. I only get filled from that one exchange and this happens noticeably after I see my limit price print, and my fills are always better than my limit price. Whereas with my US-based broker, I can see my regular lotsize limit order in the order book (depth of book data) prior to any fills. I will routinely be notified of a fill before I see the limit price print. And my fills come from any number of US exchanges (NYSE, ARCA, BATS, etc.) even for the same stock. I should point out that the \"\"NBBO\"\" rule in the US, under SEC regulation NMS, probably causes more complications in handling of market and limit orders than you're likely to find in most countries.\"",
"title": ""
},
{
"docid": "69c90279a1829fd8ce58e09cb7fd2a79",
"text": "No. If you didn't specify LIFO on account or sell by specifying the shares you wish sold, then the brokers method applies. From Publication 551 Identifying stock or bonds sold. If you can adequately identify the shares of stock or the bonds you sold, their basis is the cost or other basis of the particular shares of stock or bonds. If you buy and sell securities at various times in varying quantities and you cannot adequately identify the shares you sell, the basis of the securities you sell is the basis of the securities you acquired first. For more information about identifying securities you sell, see Stocks and Bonds under Basis of Investment Property in chapter 4 of Pub. 550. The trick is to identify the stock lot prior to sale.",
"title": ""
},
{
"docid": "efdd180becfba8054bd6540931d916d8",
"text": "\"Volume is really only valuable when compared to some other volume, either from a historical value, or from some other stock. The article you linked to doesn't provide specific numbers for you to evaluate whether volume is high or low. Many people simply look at the charts and use a gut feel for whether a day's volume is \"\"high\"\" or \"\"low\"\" in their estimation. Typically, if a day's volume is not significantly taller than the usual volume, you wouldn't call it high. The same goes for low volume. If you want a more quantitative approach, a simple approach would be to use the normal distribution statistics: Calculate the mean volume and the standard deviation. Anything outside of 1.5 to 2.0 standard deviations (either high or low) could be significant in your analysis. You'll need to pick your own numbers (1.5 or 2.0 are just numbers I pulled out of thin air.) It's hard to read anything specific into volume, since for every seller, there's a buyer, and each has their reasons for doing so. The article you link to has some good examples of using volume as a basis for strengthening conclusions drawn using other factors.\"",
"title": ""
},
{
"docid": "5b046169ed068a319df90d5012e5a886",
"text": "How come when I sell stocks, the brokerage won't let me cash out for three days, telling me the SEC requires this clearance period for the transaction to clear, but they can swap shit around in under a second? Be interesting to see what would happen if *every* transaction wasn't cleared until the closing bell.",
"title": ""
},
{
"docid": "6910613137c444c85fb4e476e25872dc",
"text": "I have heard of this, but then the broker is short the shares if they weren't selling them out of inventory, so they still want to accumulate the shares or a hedge before EOD most likely - In that case it may not be the client themselves, but that demand is hitting the market at some point if there isn't sufficient selling volume. Whether or not the broker ends up getting all of them below VWAP is a cost of marketing for them, how they expect to reliably get real size below vwap is my question.",
"title": ""
},
{
"docid": "06238bcde4f209948bd74386f6b222c0",
"text": "\"I've bought ISO stock over they years -- in NYSE traded companies. Every time I've done so, they've done what's called \"\"sell-to-cover\"\". And the gubmint treats the difference between FMV and purchase price as if it's part of your salary. And for me, they've sold some stock extra to pay estimated taxes. So, if I got this right... 20,000 shares at $3 costs you 60,000 to buy them. In my sell-to-cover at 5 scenario: did I get that right? Keeping only 4,000 shares out of 20,000 doesn't feel right. Maybe because I've always sold at a much ratio between strike price and FMV. Note I made some assumptions: first is that the company will sell some of the stock to pay the taxes for you. Second is your marginal tax rate. Before you do anything check these. Is there some reason to exercise immediately? I'd wait, personally.\"",
"title": ""
},
{
"docid": "2a2ff5a170f6667b54c358bf001ac5cf",
"text": "\"The Cash Credit from Unsettled Activity occurs because AGG issued a dividend in the past week. Since you purchased the ETF long enough before the record date (June 5, 2013) for that trade to settle, you qualified for a dividend. The dividend distribution was $0.195217/share for each of your six shares, for a total credit of $1.17 = 6 * 0.195217. For any ETF, the company's website should tell you when dividends are issued, usually under a section titled \"\"Distributions\"\" or something similar. If you look in your Fidelity account's History page, it should show an entry of \"\"Dividend Received\"\", which confirms that the cash credit is coming from a dividend distribution. You could look up your holdings and see which one(s) recently issued a dividend; in this case, it was AGG.\"",
"title": ""
},
{
"docid": "381a1ce7e502b1f9c4471e7dd0327f12",
"text": "\"This is called a Contingent Order and is set up so if one order is filled (in this case) the other order is cancelled. It's a common desire that one would wish to have a stop-loss in place but also a targeted sell price for their in-the-money sell point. Your broker will tell you all you need to know about how to enter this, if you explain you'd like to place a contingent order. (As Victor noted below, your specific order would be a \"\"One Cancels Other\"\" or \"\"OCO\"\") Great first question, welcome to Money,SE.\"",
"title": ""
},
{
"docid": "3a66a5e43fcafe49252adcf58e4aacba",
"text": "I will assume that you are not asking in the context of high frequency trading, as this is Personal Finance Stack Exchange. It is completely acceptable to trade odd lots for retail brokerage customers. The odd lot description that you provided in your link, from Interactive Brokers is correct. But even in that context, it says, regarding the acceptability of odd lots to stock exchanges: The exception is that odd lots can be routed to NYSE/ARCA/AMEX, but only as part of a basket order or as a market-on-close (MOC) order. Google GOOG is traded on the NASDAQ. Everything on the NASDAQ is electronic, and always has been. You will have no problem selling or buying less than 100 shares of Google. There is also an issue of higher commissions with odd lots: While trading commissions for odd lots may still be higher than for standard lots on a percentage basis, the popularity of online trading platforms and the consequent plunge in brokerage commissions means that it is no longer as difficult or expensive for investors to dispose of odd lots as it used to be in the past. Notice what it says about online trading making it easier, not more difficult, to trade odd lots.",
"title": ""
},
{
"docid": "e9ff81339f4419ca37158c942331a99e",
"text": "\"A market sell order will be filled at the highest current \"\"bid\"\" price. For a reasonably liquid stock, there will be several buy orders in line, and the highest bid must be filled first, so there should a very short time between when you place the order and when it is filled. What could happen is what's called front running. That's when the broker places their own order in front of yours to fulfill the current bid, selling their own stock at the slightly higher price, causing your sale to be filled at a lower price. This is not only unethical but illegal as well. It is not something you should be concerned about with a large broker. You should only place a market order when you don't care about minute differences between the current ask and your execution price, but want to guarantee order execution. If you absolutely have to sell at a minimum price, then a limit order is more appropriate, but you run the risk that your limit will not be reached and your order will not be filled. So the risk is a tradeoff between a guaranteed price and a guaranteed execution.\"",
"title": ""
},
{
"docid": "0964d9db32ade538d1fd0fdb8d764ecf",
"text": "Something really does seem seedy that if I invest $2500, that I'll make above 50k if the stock doubles. Is it really that easy? You only buy or sell on margin. Think of when the stock moves in the opposite direction. You will loose 50k. You probably didn't look into that. Investment will vanish and then you will have debt to repay. Holding for long term in CFD accounts are charged per day. Charges depends on different service providers. CFD isn't and should not be used for long term. It is primarily for trading in the short term, maybe a week at the maximum. Have a look at the wikipedia entry and educate yourself.",
"title": ""
}
] |
fiqa
|
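The positive passage in the row above describes a rough per-account screen (compare the order with about 10% of the 20-day average volume) and explains why it is not conclusive, since the NSCC applies its 5-million-share test to the clearing firm's aggregate flow. A minimal sketch of just that per-account comparison, with hypothetical figures, might look like this:

```python
def may_trigger_illiquid_charge(order_shares: int,
                                daily_volumes_20d: list,
                                pct_of_avg_limit: float = 0.10) -> bool:
    """Per-account screen only: flag orders larger than a fraction of the
    20-day average daily volume. As the passage notes, the real test is
    applied to the clearing firm's aggregate orders, so a False here does
    not guarantee that no charge will be passed through."""
    avg_volume = sum(daily_volumes_20d) / len(daily_volumes_20d)
    return order_shares > pct_of_avg_limit * avg_volume


# Hypothetical figures for illustration only.
print(may_trigger_illiquid_charge(50_000, [300_000] * 20))  # True: 50k > 10% of 300k
```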
c004e2d2f3cb53f70e3ff4d70e6638b0
|
Are SPDR funds good for beginners?
|
[
{
"docid": "03afa29a7bfd96bf54223f0adb7e71a8",
"text": "No, SPDR ETFs are not a good fit for a novice investor with a low level of financial literacy. In fact, there is no investment that is safe for an absolute beginner, not even a savings account. (An absolute beginner could easily overdraw his savings account, leading to fees and collections.) I would say that an investment becomes a good fit for an investor as soon as said investor understands how the investment works. A savings account at a bank or credit union is fairly easy to understand and is therefore a suitable place to hold money after a few hours to a day of research. (Even after 0 hours of research, however, a savings account is still better than a sock drawer.) Money market accounts (through a bank), certificates of deposit (through a bank), and money market mutual funds (through a mutual fund provider) are probably the next easiest thing to understand. This could take a few hours to a few weeks of research depending on the learner. Equities, corporate bonds, and government bonds are another step up in complexity, and could take weeks or months of schooling to understand well enough to try. Equity or bond mutual funds -- or the ETF versions of those, which is what you asked about -- are another level after that. Also important to understand along the way are the financial institutions and market infrastructure that exist to provide these products: banks, credit unions, public corporations, brokerages, stock exchanges, bond exchanges, mutual fund providers, ETF providers, etc.",
"title": ""
}
] |
[
{
"docid": "1034f141e13d0ab627501a394187997c",
"text": "You can look the Vanguard funds up on their website and view a risk factor provided by Vanguard on a scale of 1 to 5. Short term bond funds tend to get their lowest risk factor, long term bond funds and blended investments go up to about 3, some stock mutual funds are 4 and some are 5. Note that in 2008 Swenson himself had slightly different target percentages out here that break out the international stocks into emerging versus developed markets. So the average risk of this portfolio is 3.65 out of 5. My guess would be that a typical twenty-something who expects to retire no earlier than 60 could take more risk, but I don't know your personal goals or circumstances. If you are looking to maximize return for a level of risk, look into Modern Portfolio Theory and the work of economist Harry Markowitz, who did extensive work on the topic of maximizing the return given a set risk tolerance. More info on my question here. This question provides some great book resources for learning as well. You can also check out a great comparison and contrast of different portfolio allocations here.",
"title": ""
},
{
"docid": "3c0b89345b97cedbae31d67280424bad",
"text": "Your question is actually quite broad, so will try to split it into it's key parts: Yes, standard bank ISAs pay very poor rates of interest at the moment. They are however basically risk free and should track inflation. Any investment in the 6-7% return range at the moment will be linked to stock. Stock always carries large risks (~50% swings in capital are pretty standard in the short run. In the long run it generally beats every other asset class by miles). If you can’t handle those types of short terms swings, you shouldn’t get involved. If you do want to invest in stock, there is a hefty ignorance tax waiting at every corner in terms of how brokers construct their fees. In a nutshell, there is a different best value broker in the UK for virtually every band of capital, and they make their money through people signing up when they are in range x, and not moving their money when they reach band y; or just having a large marketing budget and screwing you from the start (Nutmeg at ~1% a year is def in this category). There isn't much of an obvious way around this if you are adamant you don't want to learn about it - the way the market is constructed is just a total predatory minefield for the complete novice. There are middle ground style investments between the two extremes you are looking at: bonds, bond funds and mixes of bonds and small amounts of stock (such as the Vanguard income or Conservative Growth funds outlined here), can return more than savings accounts with less risk than stocks, but again its a very diverse field that's hard to give specific advice about without knowing more about what your risk tolerance, timelines and aims are. If you do go down this (or the pure stock fund) route, it will need to be purchased via a broker in an ISA wrapper. The broker charges a platform fee, the fund charges a fund fee. In both cases you want these as low as possible. The Telegraph has a good heat map for the best value ISA platform providers by capital range here. Fund fees are always in the key investor document (KIID), under 'ongoing charges'.",
"title": ""
},
{
"docid": "d52e4bc33d7fbd7bb988121784a3e0fc",
"text": "It depends what you want to do with them. If you are just simply going to drip-feed into pre-identified shares or ETFs every few months at the market price, you don't need fancy features: just go with whoever is cheaper. You can always open another account later if you need something more exotic. Some brokerages are associated with banks and that may give you a benefit if you already deal with that bank: faster transfers (anz-etrade), or zero brokerage (westpac brokerage on westpac structured products.) There's normally no account fee so you can shop around.",
"title": ""
},
{
"docid": "2c44d62e3ce8df5859c2428ecb00f5a3",
"text": "Note that many funds just track indexes. In that case, you essentially don't have to worry about the fund manager making bad decisions. In general, the statistics are very clear that you want to avoid any actively managed fund. There are many funds that are good all-in-one investments. If you are in Canada, for example, Canadian Couch Potato recommends the Tangerine Investment Funds. The fees are a little high, but if you don't have a huge investment, one of these funds would be a good choice and appropriate for 100% of your investment. If you have a larger investment, to the point that Tangerine's MER scares you a little, you still may well look at a three or four fund (or ETF) portfolio. You may choose to use an actively-managed fund even though you know there's virtually no chance it'll beat a fund that just tracks an index, long-term. In that case, I'd recommend devoting only a small portion of your portfolio to this fund. Many people suggest speculating with no more than 10% of your combined investment. Note that other people are more positive on actively-managed funds.",
"title": ""
},
{
"docid": "a519077e8b48ef99b0d20e77a981deb0",
"text": "Thank you fgunthar. I was not aware of ILWs, but I agree - this is also the closest thing I've found. As for starting a fund, I'm unfortunately nowhere near that point. But, my curiosity seems to inevitably lead me to obscure areas like ILWs.",
"title": ""
},
{
"docid": "189074bc66e38dfa800eb176139e72b2",
"text": "\"I've been down the consolidation route too (of a handful of DC pensions; the DB ones I've not touched, and you would indeed need advice to move those around). What you should be comparing against is: what's the cheapest possible thing you could be doing? Monevators' online platform list will give you an idea of SIPP costs (if your pot is big enough and you're a buy-and-hold person, ATS' flat-fee model means costs can become arbitrarily close to zero percent), and if you're happy to be invested in something like Vanguard Lifestrategy, Target Retirement or vanilla index trackers then charges on those will be something like 0.1%-0.4%. Savings of 0.5-1.0% per year add up over pension saving timescales, but only you can decide whether whatever extra the adviser is offering vs. a more DIY approach is worth it for you. Are you absolutely sure that 0.75% pa fee isn't on top of whatever charges are built into the funds he'll invest you in? For the £1000 fee, advisers claim to have high costs per customer because of \"\"regulatory burdens\"\"; this is why there's talk of an \"\"advice gap\"\" these days: if you only have a small sum to invest, the fixed costs of advice become intolerable. IMHO, nutmeg are still quite expensive for what they offer too (although still probably cheaper than any \"\"advised\"\" route).\"",
"title": ""
},
{
"docid": "8252f119c4f67b1a6d985f5543019804",
"text": "The numbers you have quoted don't add up. For Rs 30,000 / month is 3,60,000 a year. The tax should be around 11,000 again this will be reduced by the contributions to PF. You have indicated a tax deductions of 18,000. There are multiple ways to save taxes. Since you are beginner, investments into section 80C should give you required tax benefits. Please read this article in Economic Times",
"title": ""
},
{
"docid": "12b393f48f29a67fb2145c2685cdab24",
"text": "\"Some of the other answers recommended peer-to-peer lending and property markets. I would not invest in either of these. Firstly, peer-to-peer lending is not a traditional investment and we may not have enough historical data for the risk-to-return ratio. Secondly, property investments have a great risk unless you diversify, which requires a huge portfolio. Crowd-funding for one property is not a traditional investment, and may have drawbacks. For example, what if you disagree with other crowd-funders about the required repairs for the property? If you invest in the property market, I recommend a well-diversified fund that owns many properties. Beware of high debt leverage used to enhance returns (and, at the same time, risk) and high fees when selecting a fund. However, traditionally it has been a better choice to invest in stocks than to invest in property market. Beware of anyone who says that the property market is \"\"too good to not get into\"\" without specifying which part of the world is meant. Note also that many companies invest in properties, so if you invest only in a well-diversified stock index fund, you may already have property investments in your portfolio! However, in your case I would keep the money in risk-free assets, i.e. bank savings or a genuine low-cost money market fund (i.e. one that doesn't invest in corporate debt or in variable-rate loans which have short duration but long maturity). The reason is that you're going to be unemployed soon, and thus, you may need the money soon. If you have an investment horizon of, say, 10 years, then I would throw stocks into the mix, and if you're saving for retirement, then I would go all in to stocks. In the part of the world where I live in, money market funds generally have better return than bank savings, and better diversification too. However, your 2.8% interest sounds rather high (the money market fund I have in the past invested in currently yields at 0.02%, but then again I live in the eurozone), so be sure to get estimates for the yields of different risk-free assets. So, my advice for investing is simple: risk-free assets for short time horizon, a mixture of stocks and risk-free assets for medium time horizon, and only stocks for long time horizon. In any case, you need a small emergency fund, too, which you should consider a thing separate from your investments. My emergency fund is 20 000 EUR. Your 50 000 AUD is bit more than 30 000 EUR, so you don't really have that much money to invest, only a bit more than a reasonably sized emergency fund. But then again, I live in rental property, so my expenses are probably higher than yours. If you can foresee a very long time horizon for part of your investment, you could perhaps invest 50% of your money to stocks (preference being a geographically diversified index fund or a number of index funds), but I wouldn't invest more because of the need for an emergency fund.\"",
"title": ""
},
{
"docid": "5790337078c1c0fd24948a1f5458e974",
"text": "Your idea is a good one, but, as usual, the devil is in the details, and implementation might not be as easy as you think. The comments on the question have pointed out your Steps 2 and 4 are not necessarily the best way of doing things, and that perhaps keeping the principal amount invested in the same fund instead of taking it all out and re-investing it in a similar, but different, fund might be better. The other points for you to consider are as follows. How do you identify which of the thousands of conventional mutual funds and ETFs is the average-risk / high-gain mutual fund into which you will place your initial investment? Broadly speaking, most actively managed mutual fund with average risk are likely to give you less-than-average gains over long periods of time. The unfortunate truth, to which many pay only Lipper service, is that X% of actively managed mutual funds in a specific category failed to beat the average gain of all funds in that category, or the corresponding index, e.g. S&P 500 Index for large-stock mutual funds, over the past N years, where X is generally between 70 and 100, and N is 5, 10, 15 etc. Indeed, one of the arguments in favor of investing in a very low-cost index fund is that you are effectively guaranteed the average gain (or loss :-(, don't forget the possibility of loss). This, of course, is also the argument used against investing in index funds. Why invest in boring index funds and settle for average gains (at essentially no risk of not getting the average performance: average performance is close to guaranteed) when you can get much more out of your investments by investing in a fund that is among the (100-X)% funds that had better than average returns? The difficulty is that which funds are X-rated and which non-X-rated (i.e. rated G = good or PG = pretty good), is known only in hindsight whereas what you need is foresight. As everyone will tell you, past performance does not guarantee future results. As someone (John Bogle?) said, when you invest in a mutual fund, you are in the position of a rower in rowboat: you can see where you have been but not where you are going. In summary, implementation of your strategy needs a good crystal ball to look into the future. There is no such things as a guaranteed bond fund. They also have risks though not necessarily the same as in a stock mutual fund. You need to have a Plan B in mind in case your chosen mutual fund takes a longer time than expected to return the 10% gain that you want to use to trigger profit-taking and investment of the gain into a low-risk bond fund, and also maybe a Plan C in case the vagaries of the market cause your chosen mutual fund to have negative return for some time. What is the exit strategy?",
"title": ""
},
{
"docid": "bffeaf61787f6b4ab0868de12b79540f",
"text": "\"I got started by reading the following two books: You could probably get by with just the first of those two. I haven't been a big fan of the \"\"for dummies\"\" series in the past, but I found both of these were quite good, particularly for people who have little understanding of investing. I also rather like the site, Canadian Couch Potato. That has a wealth of information on passive investing using mutual funds and ETFs. It's a good next step after reading one or the other of the books above. In your specific case, you are investing for the fairly short term and your tolerance for risk seems to be quite low. Gold is a high-risk investment, and in my opinion is ill-suited to your investment goals. I'd say you are looking at a money market account (very low risk, low return) such as e.g. the TD Canadian Money Market fund (TDB164). You may also want to take a look at e.g. the TD Canadian Bond Index (TDB909) which is only slightly higher risk. However, for someone just starting out and without a whack of knowledge, I rather like pointing people at the ING Direct Streetwise Funds. They offer three options, balancing risk vs reward. You can fill in their online fund selector and it'll point you in the right direction. You can pay less by buying individual stock and bond funds through your bank (following e.g. one of the Canadian Couch Potato's model portfolios), but ING Direct makes things nice and simple, and is a good option for people who don't care to spend a lot of time on this. Note that I am not a financial adviser, and I have only a limited understanding of your needs. You may want to consult one, though you'll want to be careful when doing so to avoid just talking to a salesperson. Also, note that I am biased toward passive index investing. Other people may recommend that you invest in gold or real estate or specific stocks. I think that's a bad idea and believe I have the science to back this up, but I may be wrong.\"",
"title": ""
},
{
"docid": "351fdf0447a27914d72272e67c26e408",
"text": "First: it sounds like you are already making wise choices with your cash surplus. You've looked for ways to keep that growing ahead of inflation and you have made use of tax shelters. So for the rest of this answer I am going to assume you have between 3-6 months expenses already saved up as a “rainy day fund” and you're ready for more sophisticated approaches to growing your funds. To answer this part: Are there any other ways that I can save/ invest that I am not currently doing? Yes, you could look at, for example: 1. Peer to peer These services let you lend to a 'basket' of borrowers and receive a return on your money that is typically higher than what's offered in cash savings accounts. Examples of peer to peer networks are Zopa, Ratesetter and FundingCircle. This involves taking some risks with your money – Zopa's lending section explains the risks. 2. Structured deposits These are a type of cash deposit product where, in return for locking your money away for a time (typically 5 years), you get the opportunity for higher returns e.g. 5% + / year. Your deposit is usually guaranteed under the FSCS (Financial services compensation scheme), however, the returns are dependent on the performance of a stock market index such as the FTSE 100 being higher in x years from now. Also, structured deposits usually require a minimum £3,000 investment. 3. Index funds You mention watching the stock prices of a few companies. I agree with your conclusion – I wouldn't suggest trying to choose individual stocks at this stage. Price history is a poor predictor of future performance, and markets can be volatile. To decide if a stock is worth buying you need to understand the fundamentals, be able to assess the current stock price and future outlook, and be comfortable accepting a range of different risks (including currency and geographic risk). If you buy shares in a small number of companies, you are concentrating your risk (especially if they have things in common with each other). Index funds, while they do carry risks, let you pool your money with other investors to buy shares in a 'basket' of stocks to replicate the movement of an index such as the FTSE All Share. The basket-of-stocks approach at least gives you some built-in diversification against the risks of individual stocks. I suggest index funds (as opposed to actively managed funds, where you pay a management fee to have your investments chosen by a professional who tries to beat the market) because they are low cost and easier to understand. An example of a very low cost index fund is this FTSE All Share tracker from Aberdeen, on the Hargreaves Lansdown platform: http://www.hl.co.uk/funds/fund-discounts,-prices--and--factsheets/search-results/a/aberdeen-foundation-growth-accumulation General principle on investing in stock market based index funds: You should always invest with a 5+ year time horizon. This is because prices can move up and down for reasons beyond your anticipation or control (volatility). Time can smooth out volatility; generally, the longer the time period, the greater your likelihood of achieving a positive return. I hope this answer so far helps takes into account the excess funds. So… to answer the second part of your question: Or would it be best to start using any excess funds […] to pay off my student loan quicker? Your student loan is currently costing you 0.9% interest per annum. At this rate it's lower than the last 10 years average inflation. 
One argument: if you repay your student loan this is effectively a 0.9% guaranteed return on every pound repaid – This is the equivalent of 1.125% on a cash savings account if you're paying basic rate tax on the interest. An opposing argument: 0.9% is lower than the last 10 years' average inflation in the UK. There are so many advantages to making a start with growing your money for the long term, due to the effects of compound returns, that you might choose to defer your loan repayments for a while and focus on building up some investments that stand a chance to beat inflation in the long term.",
"title": ""
},
{
"docid": "3f665baca9e2e42ab39bf00e9fb75c8b",
"text": "Bond aren't necessarily any safer than the stock market. Ultimately, there is no such thing as a low risk mutual fund. You want something that will allow you get at your money relatively quickly. In other words, CDs (since you you can pick a definite time period for your money to be tied up), money market account or just a plain old savings account. Basically, you want to match inflation and have easy access to the money. Any other returns on top of that are gravy, but don't fret too much about it. See also: Where can I park my rainy-day / emergency fund? Savings accounts don’t generate much interest. Where should I park my rainy-day / emergency fund?",
"title": ""
},
{
"docid": "05f4925f5d8fd3d6ddd0d008ab149723",
"text": "The partition is more or less ok, the specific products are questionable. Partition. It's usually advised to keep 2-3 monthly income liquid. In your case, 40-45 kEUR is ca. 24-27 kEUR netto, i.e. 2000-2250 a month, thus, the range is 4-7 kEUR, as you are strongly risk-averse then 7k is still ok. Then they propose you to invest 60% in low-risk, but illiquid and 15% in middle or high risk which is also ok. However, it doesn't have to be real estate, but could be. Specifics. Be aware that a lot (most?) of the banks (including local banks, they are, however, less aggressive) often sell the products that promise high commissions to them (often with a part flowing directly to your client advisor). Especially now, when the interest rates are low, they stand under extra pressure. You should rather switch to passively managed funds with low fees. If you stick up to the actively managed funds with their fees, you should choose them yourself.",
"title": ""
},
{
"docid": "21644dc58ac157d153254c1422b6763b",
"text": "I personally like Schwab. Great service, low fees, wide variety of fund are available at no fee. TD Ameritrade is good too.",
"title": ""
},
{
"docid": "8abab3a7c58f602a64ee42553c53c2d9",
"text": "\"I don't think you have your head in the right space - you seem to be thinking of these lifecycle funds like they're an annuity or a pension, but they're not. They're an investment. Specifically, they're a mutual fund that will invest in a collection of other mutual funds, which in turn invest in stock and bonds. Stocks go up, and stocks go down. Bonds go up, and bonds go down. How much you'll have in this fund next year is unknowable, much less 32 years from now. What you can know, is that saving regularly over the next 32 years and investing it in a reasonable, and diversified way in a tax sheltered account like that Roth will mean you have a nice chunk of change sitting there when you retire. The lifecycle funds exist to help you with that \"\"reasonable\"\" and \"\"diversified\"\" bit.They're meant to be one stop shopping for a retirement portfolio. They put your money into a diversified portfolio, then \"\"age\"\" the portfolio allocations over time to make it go from a high risk, (potentially) high reward allocation now to a lower risk, lower reward portfolio as you approach retirement. The idea is is that you want to shoot for making lots of money now, but when you're older, you want to focus more on keeping the money you have. Incidentally, kudos for getting into seriously saving for retirement when you're young. One of the biggest positive effects you can have on how much you retire with is simply time. The more time your money can sit there, the better. At 26, if you're putting away 10 percent into a Roth, you're doing just fine. If that 5k is more than 10 percent, you'll do better than fine. (That's a rule of thumb, but it's based on a lot of things I've read where people have gamed out various scenarios, as well as my own, cruder calculations I've done in the past)\"",
"title": ""
}
] |
fiqa
|
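One of the negative passages in the row above quotes an "average risk of 3.65 out of 5" for a model portfolio built from Vanguard's 1-to-5 fund risk ratings; that figure is simply an allocation-weighted mean. A minimal sketch of the arithmetic, using made-up weights and ratings rather than the portfolio the passage refers to:

```python
def weighted_average_risk(weights: dict, ratings: dict) -> float:
    """Allocation-weighted mean of per-fund risk ratings (1 = lowest, 5 = highest)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * ratings[name] for name in weights)


# Hypothetical allocation and ratings for illustration only.
weights = {"us_stock": 0.30, "intl_stock": 0.20, "reit": 0.20,
           "treasuries": 0.15, "tips": 0.15}
ratings = {"us_stock": 4, "intl_stock": 5, "reit": 4,
           "treasuries": 2, "tips": 2}
print(round(weighted_average_risk(weights, ratings), 2))  # 3.6
```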
f35fb93160b28a7119a109c20e88b3ac
|
How to make use of EUR/USD fluctuations in my specific case?
|
[
{
"docid": "eda543db876b5d150a730688db867bef",
"text": "This is called currency speculation, and it's one of the more risky forms of investing. Unless you have a crystal ball that tells you the Euro will move up (or down) relative to the Dollar, it's purely speculation, even if it seems like it's on an upswing. You have to remember that the people who are speculating (professionally) on currency are the reason that the amount changed, and it's because something caused them to believe the correct value is the current one - not another value in one direction or the other. This is not to say people don't make money on currency speculation; but unless you're a professional investor, who has a very good understanding of why currencies move one way or the other, or know someone who is (and gives free advice!), it's not a particularly good idea to engage in it - while stock trading is typically win-win, currency speculation is always zero-sum. That said, you could hedge your funds at this point (or any other) by keeping some money in both accounts - that is often safer than having all in one or the other, as you will tend to break even when one falls against the other, and not suffer significant losses if one or the other has a major downturn.",
"title": ""
},
{
"docid": "60f3356747247bc63b7afea2a1b05324",
"text": "Remember that converting from EU to USD and the other way around always costs you money, at least 0.5% per conversion. Additionally, savings accounts in EU and USA have different yields, you may want to compare which country offers you the best yields and move your money to the highest yielding account.",
"title": ""
},
{
"docid": "3aeef25d59c01d9382647746f9d7cada",
"text": "\"I would make this a comment but I am not allowed apparently. Unless your continent blows up, you'll never lost all your money. Google \"\"EUR USD\"\" if you want news stories or graphs on this topic. If you're rooting for your 10k USD (but not your neighbors), you want that graph to trend downward.\"",
"title": ""
}
] |
[
{
"docid": "db7a27bf0afb30d12a004f760578f6a8",
"text": "\"is there anything I can do now to protect this currency advantage from future volatility? Generally not much. There are Fx hedges available, however these are for specialist like FI's and Large Corporates, traders. I've considered simply moving my funds to an Australian bank to \"\"lock-in\"\" the current rate, but I worry that this will put me at risk of a substantial loss (due to exchange rates, transfer fees, etc) when I move my funds back into the US in 6 months. If you know for sure you are going to spend 6 months in Australia. It would be wise to money certain amount of money that you need. So this way, there is no need to move back funds from Australia to US. Again whether this will be beneficial or not is speculative and to an extent can't be predicted.\"",
"title": ""
},
{
"docid": "c4928107daac55e5455a1f8a674e89ce",
"text": "Use other currencies, if available. I'm not familiar with the banking system in South Africa; if they haven't placed any currency freezes or restrictions, you might want to do this sooner than later. In full crises, like Russian and Ukraine, once the crisis worsened, they started limiting purchases of foreign currencies. PayPal might allow currency swaps (it implies that it does at the bottom of this page); if not, I know Uphold does. Short the currency Brokerage in the US allow us to short the US Dollar. If banks allow you to short the ZAR, you can always use that for protection. I looked at the interest rates in the ZAR to see how the central bank is offsetting this currency crisis - WOW - I'd be running, not walking toward the nearest exit. A USA analogy during the late 70s/early 80s would be Paul Volcker holding interest rates at 2.5%, thinking that would contain 10% inflation. Bitcoin Comes with significant risks itself, but if you use it as a temporary medium of exchange for swaps - like Uphold or with some bitcoin exchanges like BTC-e - you can get other currencies by converting to bitcoin then swapping for other assets. Bitcoin's strength is remitting and swapping; holding on to it is high risk. Commodities I think these are higher risk right now as part of the ZAR's problem is that it's heavily reliant on commodities. I looked at your stock market to see how well it's done, and I also see that it's done poorly too and I think the commodity bloodbath has something to do with that. If you know of any commodity that can stay stable during uncertainty, like food that doesn't expire, you can at least buy without worrying about costs rising in the future. I always joke that if hyperinflation happened in the United States, everyone would wish they lived in Utah.",
"title": ""
},
{
"docid": "b9584a6f6554b2d2367ec417532961f0",
"text": "e.g. a European company has to pay 1 million USD exactly one year from now While that is theoretically possible, that is not a very common case. Mostly likely if they had to make a 1 million USD payment a year from now and they had the cash on hand they would be able to just make the payment today. A more common scenario for currency forwards is for investment hedging. Say that European company wants to buy into a mutual fund of some sort, say FUSEX. That is a USD based mutual fund. You can't buy into it directly with Euros. So if the company wants to buy into the fund they would need to convert their Euros to to USD. But now they have an extra risk parameter. They are not just exposed to the fluctuations of the fund, they are also exposed to the fluctuations of the currency market. Perhaps that fund will make a killing, but the exchange rate will tank and they will lose all their gains. By creating a forward to hedge their currency exposure risk they do not face this risk (flip side: if the exchange rate rises in a favorable rate they also don't get that benefit, unless they use an FX Option, but that is generally more expensive and complicated).",
"title": ""
},
{
"docid": "15404acf93f7162857cc0bc696e09b11",
"text": "\"There are firms that let you do this. I believe that Saxo Bank is one such firm (note that I'm not endorsing the company at all, and have no experience with it) Keep in mind that the reason that these currencies are \"\"exotic\"\" is because the markets for trading are small. Small markets are generally really bad for retail/non-professional investors. (Also note: I'm not trying to insult Brazil or Thailand, which are major economies. In this context, I'm specifically concerned with currency trading volume.)\"",
"title": ""
},
{
"docid": "1045b2db53cd0bc42ef37ebd4f8aad91",
"text": "About the inflation or low interest rates in both the countries is out of the equation especially since rupee is always a low currency compared to Euro. You cannot make profit in Euros using rupee or vice-versa. It all depends on where you want to use the money, in India or Europe? If you want use the money from fixed deposit in Europe, then buy fixed deposit in euros from Europe. If you want to use the money in India, then convert the euros and buy FD in India.",
"title": ""
},
{
"docid": "057c8941ff4fd43be95685dd3b8b1374",
"text": "I'm sorry I guess what i meant to say was, what's the downside here? Why isn't everyone doing this, what am i missing? Someone clarified that i'm completely exposed to FX risk if I bring it back. What if I am IN australia, how would I do this, short USD's?",
"title": ""
},
{
"docid": "ffed5c7119959ba1d41c3d6541485cca",
"text": "You could buy some call options on the USD/INR. That way if the dollar goes up, you'll make the difference, and if the dollar goes down, then you'll lose the premium you paid. I found some details on USD/INR options here Looks like the furthest out you can go is 3 months. Note they're european style options, so they can only be exercised on the expiration date (as opposed to american style, which can be exercised at any time up to the expiration date). Alternatively, you could buy into some futures contracts for the USD/INR. Those go out to 12 months. With futures if the dollar goes up, you get the difference, if the dollar goes down, you pay the difference. I'd say if you were going to do something like this, stick with the options, since the most you could lose there is the premium you put up for the option contracts. With futures, if it suddenly moved against you you could find yourself with huge losses. Note that playing in the futures and options markets are an easy way to get burned -- it's not for the faint of heart.",
"title": ""
},
{
"docid": "71973b471b6779c847e78549ccae7fb6",
"text": "Rather than screwing around with foreign currencies, hop over to Germany and open an account at the first branch of Deutsche or Commerzbank you see. If the euro really does disintegrate, you want to have your money in the strongest country of the lot. Edit: and what I meant to say is that if the euro implodes, you'll end up with deutschmarks, which, unlike the new IEP, will *not* need to devalue. (And in the meantime, you've still got euros, so you have no FX risk.)",
"title": ""
},
{
"docid": "83d9ae6ad60870a09c431cbe4c9498a1",
"text": "\"I suggest that you're really asking questions surrounding three topics: (1) what allocation hedges your risks but also allows for upside? (2) How do you time your purchases so you're not getting hammered by exchange rates? (3) How do you know if you're doing ok? Allocations Your questions concerning allocation are really \"\"what if\"\" questions, as DoubleVu points out. Only you can really answer those. I would suggest building an excel sheet and thinking through the scenarios of at least 3 what-ifs. A) What if you keep your current allocations and anything in local currency gets cut in half in value? Could you live with that? B) What if you allocate more to \"\"stable economies\"\" and your economy recovers... so stable items grow at 5% per year, but your local investments grow 50% for the next 3 years? Could you live with that missed opportunity? C) What if you allocate more to \"\"stable economies\"\" and they grow at 5%... while SA continues a gradual slide? Remember that slow or flat growth in a stable currency is the same as higher returns in a declining currency. I would trust your own insights as a local, but I would recommend thinking more about how this plays out for your current investments. Timing You bring up concerns about \"\"timing\"\" of buying expensive foreign currencies... you can't time the market. If you knew how to do this with forex trading, you wouldn't be here :). Read up on dollar cost averaging. For most people, and most companies with international exposure, it may not beat the market in the short term, but it nets out positive in the long term. Rebalancing For you there will be two questions to ask regularly: is the allocation still correct as political and international issues play out? Have any returns or losses thrown your planned allocation out of alignment? Put your investment goals in writing, and revisit it at least once a year to evaluate whether any adjustments would be wise to make. And of course, I am not a registered financial professional, especially not in SA, so I obviously recommend taking what I say with a large dose of salt.\"",
"title": ""
},
{
"docid": "cb4539d14a460c05bbedaebb6a7be667",
"text": "Trying to engage in arbitrage with the metal in nickels (which was actually worth more than a nickel already, last I checked) is cute but illegal, and would be more effective at an industrial scale anyway (I don't think you could make it cost-effective at an individual level). There are more effective inflation hedges than nickels and booze. Some of them even earn you interest. You could at least consider a more traditional commodities play - it's certainly a popular strategy these days. A lot of people shoot for gold, as it's a traditional hedge in a crisis, but there are concerns that particular market is overheated, so you might consider alternatives to that. Normal equities (i.e. the stock market) usually work out okay in an inflationary environment, and can earn you a return as they're doing so.... and it's not like commodities aren't volatile and subject to the whims of the world economy too. TIPs (inflation-indexed Treasury bonds) are another option with less risk, but also a weaker return (and still have interest rate risks involved, since those aren't directly tied to inflation either).",
"title": ""
},
{
"docid": "6207d6f6b6c4c84fc02c0153c0fc89f6",
"text": "I would strongly recommend investing in assets and commodities. I personally believe fiat money is losing its value because of a rising inflation and the price of oil. The collapse of the euro should considerably affect the US currency and shake up other regions of the world in forex markets. In my opinion, safest investment these days are hard assets and commodities. Real estate, land, gold, silver(my favorite) and food could provide some lucrative benefits. GL mate!",
"title": ""
},
{
"docid": "889b617c42eb36f14a26d3441f38a8f3",
"text": "Have you tried calling a Forex broker and asking them if you can take delivery on currency? Their spreads are likely to be much lower than banks/ATMs.",
"title": ""
},
{
"docid": "898ce44c82eb87251d3e0d36b6907dda",
"text": "You could go further and do a carry trade by borrowing EUR at 2% and depositing INR at 10%. All the notes above apply, and see the link there.",
"title": ""
},
{
"docid": "1cfa763eb7329a1cea601b1c91dda9c7",
"text": "\"In short, yes. By \"\"forward selling\"\", you enter into a futures contract by which you agree to trade Euros for dollars (US or Singapore) at a set rate agreed to by both parties, at some future time. You are basically making a bet; you think that the dollar will gain on the Euro and thus you'd pay a higher rate on the spot than you've locked in with the future. The other party to the contract is betting against you; he thinks the dollar will weaken, and so the dollars he'll sell you will be worth less than the Euros he gets for them at the agreed rate. Now, in a traditional futures contract, you are obligated to execute it, whether it ends up good or bad for you. You can, to avoid this, buy an \"\"option\"\". By buying the option, you pay the other party to the deal for the right to say \"\"no, thanks\"\". That way, if the dollar weakens and you'd rather pay spot price at time of delivery, you simply let the contract expire un-executed. The tradeoff is that options cost money up-front which is now sunk; whether you exercise the option or not, the other party gets the option price. That basically creates a \"\"point spread\"\"; you \"\"win\"\" if the dollar appreciates against the Euro enough that you still save money even after buying the option, or if the dollar depreciates against the Euro enough that again you still save money after subtracting the option price, while you \"\"lose\"\" if the exchange rates are close enough to what was agreed on that it cost you more to buy the option than you gained by being able to choose to use it.\"",
"title": ""
},
{
"docid": "bcbd96d50a6f159f56b3bc04413bca94",
"text": "\"We're in agreement, I just want retail investors to understand that in most of these types of discussions, the unspoken reality is the retail sector trading the market is *over*. This includes the mutual funds you mentioned, and even most index funds (most are so narrowly focused they lose their relevance for the retail investor). In the retail investment markets I'm familiar with, there are market makers of some sort or another for specified ranges. I'm perfectly fine with no market makers; but retail investors should be told the naked truth as well, and not sold a bunch of come-ons. What upsets me is seeing that just as computers really start to make an orderly market possible (you are right, the classic NYSE specialist structure was outrageously corrupt), regulators turned a blind eye to implementing better controls for retail investors. The financial services industry has to come to terms whether they want AUM from retail or not, and having heard messaging much like yours from other professionals, I've concluded that the industry does *not* want the constraints with accepting those funds, but neither do they want to disabuse retail investors of how tilted the game is against them. Luring them in with deceptively suggestive marketing and then taking money from those naturally ill-prepared for the rigors of the setting is like beating up the Downs' Syndrome kid on the short bus and boasting about it back on the campus about how clever and strong one is. If there was as stringent truth in marketing in financial services as cigarettes, like \"\"this service makes their profit by encouraging the churning of trades\"\", there would be a lot of kvetching from so-called \"\"pros\"\" as well. If all retail financial services were described like \"\"dead cold cow meat\"\" describes \"\"steak\"\", a lot of retail investors would be better off. As it stands today, you'd have to squint mighty hard to see the faintly-inscribed \"\"caveat emptor\"\" on financial services offerings to the retail sector. Note that depending upon the market setting, the definition of retail differs. I'm surprised the herd hasn't been spooked more by the MF Global disaster, for example, and yet there are some surprisingly large accounts detrimentally affected by that incident, which in a conventional equities setting would be considered \"\"pros\"\".\"",
"title": ""
}
] |
fiqa
|
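One of the negative passages in the row above walks through forward-selling euros for dollars and the option variant, where a premium buys the right to walk away. A minimal sketch of that payoff arithmetic, quoting rates as USD per EUR and using hypothetical numbers:

```python
def forward_sell_eur_payoff(agreed_rate: float, spot_at_expiry: float,
                            notional_eur: float) -> float:
    """USD gained (or lost) by selling EUR at the pre-agreed rate
    instead of at the spot rate on the delivery date."""
    return (agreed_rate - spot_at_expiry) * notional_eur


def option_to_sell_eur_payoff(strike: float, spot_at_expiry: float,
                              notional_eur: float, premium_usd: float) -> float:
    """Same trade, but the holder may decline to exercise, so the
    worst case is losing only the premium paid up front."""
    return max(strike - spot_at_expiry, 0.0) * notional_eur - premium_usd


# Hypothetical: lock in 1.10 USD/EUR on 100,000 EUR; spot ends at 1.05.
print(forward_sell_eur_payoff(1.10, 1.05, 100_000))             # ~5000 USD
print(option_to_sell_eur_payoff(1.10, 1.05, 100_000, 2_000.0))  # ~3000 USD
```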
4f97454f084b6430e31ae0a78b02a768
|
How and where can I deposit money to generate future payments / income?
|
[
{
"docid": "eacf7563b0bb74ec08e6a109be9e26ff",
"text": "Reversing your math, I am assuming you have $312K to work with. In that case, I would simply shop around your local banks and/or credit unions and have them compete for your money and you might be quite surprised how much they are willing to pay. A couple of months ago, you would be able to get about 4.25% from Israel Bonds in Canada on 5 years term (the Jubilee product, with minimum investment of $25K). It's a bit lower now, but you should still be able to get very good rates if you shop around tier-2 banks or credit unions (who are more hungry for capital than the well-funded tier-1 banks). Or you could look at preferred shares of a large corporation. They are different from common shares in the sense they are priced according to the payout rate (i.e. people buy it for the dividend). A quick screen from your favorite stock exchange ought to find you a few options. Another option is commercial bonds. You should be able to get that kind of return from investment grade (BBB- and higher) bonds on large corporations these days. I just did a quick glance at MarketWatch's Bond section (http://cxa.marketwatch.com/finra/BondCenter/Default.aspx) and found AAA grade bonds that will yield > 5%. You will need to investigate their underlying fundamentals, coupon rate and etc before investing (second thought, grab a introduction to bonds book from Chapters first). Hope these helps.",
"title": ""
},
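The "reversing your math" remark above implies a simple relationship between a target income and the principal required at a given yield. Below is a minimal sketch of that calculation; the income figure and yields are hypothetical, picked only so the outputs line up with the ~$312K and 4.25%-5% rates mentioned in the answer.

```python
# Minimal sketch: how much principal a target income requires at a given
# yield, and what a given pot throws off per year. Inputs are hypothetical.

def principal_needed(target_annual_income, yield_rate):
    """Principal required to generate a target annual income at a given yield."""
    return target_annual_income / yield_rate

def annual_income(principal, yield_rate):
    """Income thrown off each year by a principal at a given yield."""
    return principal * yield_rate

print(principal_needed(13_260, 0.0425))   # -> 312000.0 (hypothetical target income at 4.25%)
print(annual_income(312_000, 0.05))       # -> 15600.0  (at a 5% corporate-bond yield)
```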
{
"docid": "c842e205a840bf448e0a7df3c75c5bbe",
"text": "If you're in the USA and looking to retire in 10 years, pay your Social Security taxes? :P Just kidding. Do a search for Fixed Rate Annuities.",
"title": ""
}
] |
[
{
"docid": "c255f9fe7a02eec2d330e649199f09dc",
"text": "Unfortunately, in this market environment your goal is not very realistic. At the moment real interest rates are negative (and have been for some time). This means if you invest in something that will pay out for sure, you can expect to earn less than you lose through inflation. In other words, if you save your $50K, when you withdraw it in a few years you will be able to buy less with it then than you can now. You can invest in risky securities like stocks or mutual funds. These assets can easily generate 10% per year, but they can (and do) also generate negative returns. This means you can and likely will lose money after investing in them. There's an even better chance that you will make money, but that varies year by year. If you invest in something that expects to make 10% per year (meaning it makes that much on average), it will be extremely risky and many years it will lose money, perhaps a lot of it. That's the way risk is. Are you comfortable taking on large amounts of risk (good chances of losing a lot of your money)? You could make some kind of real investment. $50K is a little small to buy real estate, but you may be able to find something like real estate that can generate income, especially if you use it as a down payment to borrow from the bank. There is risk in being a landlord as well, of course, and a lot of work. But real investments like that are a reasonable alternative to financial markets for some people. Another possibility is to just keep it in your bank account or something else with no risk and take $5000 out per year. It will only last you 10 years that way, but if you are not too young, that will be a significant portion of your life. If you are young, you can work and add to it. Unfortunately, financial markets don't magically make people rich. If you make a lot of money in the market, it's because you took a risk and got lucky. If you make a comfortable amount with no risk, it means you invested in a market environment very different from what we see today. --------- EDIT ------------ To get an idea of what risk free investments (after inflation) earn per year at various horizons see this table at the treasury. At the time of this writing you would have to invest in a security with maturity almost 10 years in order to break even with inflation. Beating it by 10% or even 3% per year with minimal risk is a pipe dream.",
"title": ""
},
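Two of the claims above (that a "safe" return below inflation loses purchasing power, and that a $50K pot withdrawn at $5,000 per year lasts about 10 years) are easy to check with a short sketch. The nominal rate and inflation rate used here are assumptions for illustration only.

```python
# Minimal sketch of the two points in the answer: (1) a "safe" return below
# inflation loses purchasing power, and (2) a fixed withdrawal from a pot
# that earns nothing runs out after a predictable number of years.

def real_value(nominal, years, nominal_rate, inflation):
    """Purchasing power of `nominal` after `years`, expressed in today's money."""
    return nominal * ((1 + nominal_rate) / (1 + inflation)) ** years

def years_until_empty(pot, withdrawal_per_year, annual_return=0.0):
    years = 0
    while pot >= withdrawal_per_year:
        pot = (pot - withdrawal_per_year) * (1 + annual_return)
        years += 1
    return years

print(real_value(50_000, 10, nominal_rate=0.01, inflation=0.025))  # ~ $43,000 in today's money
print(years_until_empty(50_000, 5_000))                            # 10 years, as in the answer
```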
{
"docid": "a849b511991ca24f1b68207ffef4b33a",
"text": "How do I direct deposit my paycheck into a high yield financial vehicle, like lottery tickets? And can I roll over my winnings into more lottery tickets? I want to wait until I have a few billion before touching it, maybe in a year or two.",
"title": ""
},
{
"docid": "9b57b79376f59df43a6a51ee2b861ac6",
"text": "A credit card is essentially a contract where they will loan you money in an on demand basis. It is not a contract for you to loan them money. The money that you have overpaid is generally treated as if it is a payment for a charge that you have made that has not been processed yet. The bank can not treat that money as a deposit and thus leverage it make money them selves. You can open an account and get a debit card. This would allow you to accrue interest for your deposit while using your money. But if you find one willing to pay you 25% interest please share with the rest of us :)",
"title": ""
},
{
"docid": "95f8b0a9613586413cfb36902c06e781",
"text": "Genius answer: Don't spend more than you make. Pay off your outstanding debts. Put plenty away towards savings so that you don't need to rely on credit more than necessary. Guaranteed to work every time. Answer more tailored to your question: What you're asking for is not realistic, practical, logical, or reasonable. You're asking banks to take a risk on you, knowing based on your credit history that you're bad at managing debt and funds, solely based on how much cash you happen to have on hand at the moment you ask for credit or a loan or based on your salary which isn't guaranteed (except in cases like professional athletes where long-term contracts are in play). You can qualify for lower rates for mortgages with a larger down-payment, but you're still going to get higher rate offers than someone with good credit. If you plan on having enough cash around that you think banks would consider making you credit worthy, why bother using credit at all and not just pay for things with cash? The reason banks offer credit or low interest on loans is because people have proven themselves to be trustworthy of repaying that debt. Based on the information you have provided, the bank wouldn't consider you trustworthy yet. Even if you have $100,000 in cash, they don't know that you're not just going to spend it tomorrow and not have the ability to repay a long-term loan. You could use that $100,000 to buy something and then use that as collateral, but the banks will still consider you a default risk until you've established a credit history to prove them otherwise.",
"title": ""
},
{
"docid": "4767150d12ae946f266ade3beae6a7b0",
"text": "You could keep an eye on BankSimple perhaps? I think it looks interesting at least... too bad I don't live in the US... They are planning to create an API where you can do everything you can do normally. So when that is released you could probably create your own command-line interface :)",
"title": ""
},
{
"docid": "870f7f11ad028d9c36b07164d1596f6f",
"text": "\"> My issue understanding this is I've been told that banks actually don't hold 10% of the cash and lend the other 90% but instead hold the full 100% in cash and lend 900%. Is this accurate? That's the money multiplier effect being poorly described. You take a loan out, but that loan eventually makes its way to other banks as cash deposits, which then are loaned out, and go to other banks, and loaned, etc., so that the economy is \"\"running\"\" on 10x cash, where 1x is in physical cash, and the other 9x is in this deposit-loan-deposit phenomenon. > The issue I see with it is that it becomes exponential growth that is uncapped. Not true. If there is $1B outstanding \"\"physical\"\" cash (the money supply) with a 10% reserve, then the maximum amount of \"\"money\"\" flowing through is $1B / 10% = $10B. This assumes EVERYTHING legally possible is loaned out or saved in the banking system. As such, it represents a cap. If you have an Excel spreadsheet handy, you can easily model this out in four columns. Label the first row as follows: Deposit, Reserve, Cash Reserve, Loan Amount A2 will be your money supply. For simplicity, put $100. B2, your reserve column, will be 10%. C2 should be =A2 * B2, which will be the cash reserve in the bank. D2 should be A2 - C2, which is the new loan amount extended. A3 should be = D2, as the loans extended from the last step become deposits in the next. B3 = B2. Now, drag the formulas down, say, 500 rows. If you then sum the \"\"deposits\"\" column, it'll total $1,000. The cash reserve will total $100, and the loan amount will be $900. Thus, there is a cap.\"",
"title": ""
},
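The spreadsheet recipe above translates directly into a short loop. This sketch reproduces the same model: a $100 initial deposit, a 10% reserve, each loan re-deposited at the next bank, and totals that converge to deposit / reserve ratio.

```python
# Minimal sketch of the spreadsheet described above. Each bank keeps the
# reserve fraction of its deposit and lends the rest, which becomes the
# next bank's deposit. The totals are capped at deposit / reserve_ratio.

def money_multiplier(initial_deposit, reserve_ratio, rounds=500):
    total_deposits = total_reserves = total_loans = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        reserve = deposit * reserve_ratio    # held back by the bank
        loan = deposit - reserve             # lent out, re-deposited elsewhere
        total_deposits += deposit
        total_reserves += reserve
        total_loans += loan
        deposit = loan                       # next bank's deposit
    return total_deposits, total_reserves, total_loans

print(money_multiplier(100, 0.10))   # -> (~1000.0, ~100.0, ~900.0), i.e. capped at 100 / 0.10
```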
{
"docid": "e3597d5686151e5780cf14fe1fd20ac7",
"text": "Perfect super clear, thank you /u/xlct2 So it is like you buy a bond for $X, start getting interest, sell bond for $X :) I was thinking there could be a possibility of a bond working like a loan from a bank, that you going paying as time goes by :D",
"title": ""
},
{
"docid": "ac326aca2189c78f3ed3457661c6f291",
"text": "My daughter is two, and she has a piggy bank that regularly dines on my pocket change. When that bank is worth $100 or so I will make it a regular high yield savings account. Then I will either setup a regular $10/month transfer into it, or something depending on what we can afford. My plan is then to offer my kid an allowance when she can understand the concept of money. My clever idea is I will offer her a savings plan with the Bank of Daddy. If she lets me keep her allowance for the week, I will give her double the amount plus a percentage the next week. If she does it she will soon see the magic of saving money and how banks pay your for the privilege. I don't know when I will give her access to the savings account with actual cash. I will show it to her, and review it with her so she can track her money, but I need to know that she has some restraint before I open the gates to her.",
"title": ""
},
{
"docid": "308f51e6fffb971b0f16420cd23e042f",
"text": "For this scheme to work, you would require an investment with no chance of a loss. Money market accounts and short-term t-bills are about your only options. The other thing is that you will need to be very careful to never miss the payment date. One month's late charges will probably wipe out a few months' profit. The only other caveat, which I'm sure you've considered, is that having your credit maxed out will hurt your credit score.",
"title": ""
},
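As a rough illustration of the warning above, here is a minimal sketch comparing the interest earned on a floated balance with a single late charge. The balance, yield, and fee are all assumptions, not figures from the answer.

```python
# Minimal sketch (hypothetical numbers) of the warning above: the interest
# earned by "floating" spending in a money market account is small, and a
# single late charge can wipe out months of it.

balance_floated = 5_000      # average amount parked before the card is paid (assumption)
money_market_apy = 0.04      # annual yield on a money market / T-bill (assumption)
late_fee = 35.00             # one missed-payment charge (assumption)

monthly_interest = balance_floated * money_market_apy / 12
months_wiped_out = late_fee / monthly_interest

print(f"interest per month: ${monthly_interest:.2f}")                    # ~$16.67
print(f"one late fee erases ~{months_wiped_out:.1f} months of profit")   # ~2.1 months
```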
{
"docid": "92bc54545894a84958a397e020d8c194",
"text": "\"Nowadays, some banks in some countries offer things like temporary virtual cards for online payments. They are issued either free of charge or at a negligible charge, immediately, via bank's web interface (access to which might either be free or not, this varies). You get a separate account for the newly-issued \"\"card\"\" (the \"\"card\"\" being just a set of numbers), you transfer some money there (same web-interface), you use it to make payment(s), you leave $0 on that \"\"card\"\" and within a day or a month, it expires. Somewhat convenient and your possible loss is limited tightly. Check if your local banks offer this kind of service.\"",
"title": ""
},
{
"docid": "d8209f4c9de8d573f190b134f7b2fb0b",
"text": "\"What are the options available for safe, short-term parking of funds? Savings accounts are the go-to option for safely depositing funds in a way that they remain accessible in the short-term. There are many options available, and any recommendations on a specific account from a specific institution depend greatly on the current state of banks. As you're in the US, If you choose to save funds in a savings account, it's important that you verify that the account (or accounts) you use are FDIC insured. Also be aware that the insurance limit is $250,000, so for larger volumes of money you may need to either break up your savings into multiple accounts, or consult a Accredited Investment Fiduciary (AIF) rather than random strangers on the internet. I received an inheritance check... Money is a token we exchange for favors from other people. As their last act, someone decided to give you a portion of their unused favors. You should feel honored that they held you in such esteem. I have no debt at all and aside from a few deferred expenses You're wise to bring up debt. As a general answer not geared toward your specific circumstances: Paying down debt is a good choice, if you have any. Investment accounts have an unknown interest rate, whereas reducing debt is guaranteed to earn you the interest rate that you would have otherwise paid. Creating new debt is a bad choice. It's common for people who receive large windfalls to spend so much that they put themselves in financial trouble. Lottery winners tend to go bankrupt. The best way to double your money is to fold it in half and put it back in your pocket. I am not at all savvy about finances... The vast majority of people are not savvy about finances. It's a good sign that you acknowledge your inability and are willing to defer to others. ...and have had a few bad experiences when trying to hire someone to help me Find an AIF, preferably one from a largish investment firm. You don't want to be their most important client. You just want them to treat you with courtesy and give you simple, and sound investment advice. Don't be afraid to shop around a bit. I am interested in options for safe, short \"\"parking\"\" of these funds until I figure out what I want to do. Apart from savings accounts, some money market accounts and mutual funds may be appropriate for parking funds before investing elsewhere. They come with their own tradeoffs and are quite likely higher risk than you're willing to take while you're just deciding what to do with the funds. My personal recommendation* for your specific circumstances at this specific time is to put your money in an Aspiration Summit Account purely because it has 1% APY (which is the highest interest rate I'm currently aware of) and is FDIC insured. I am not affiliated with Aspiration. I would then suggest talking to someone at Vanguard or Fidelity about your investment options. Be clear about your expectations and don't be afraid to simply walk away if you don't like the advice you receive. I am not affiliated with Vanguard or Fidelity. * I am not a lawyer, fiduciary, or even a person with a degree in finances. For all you know I'm a dog on the internet.\"",
"title": ""
},
{
"docid": "3b92ddb76f337b877c0bd43c2cf267c2",
"text": "Another option is the new 'innovative finance isa' that allow you to put a wrapper round peer to peer lending platform investments. See Zopa, although I don't think they have come out with an ISA yet.",
"title": ""
},
{
"docid": "3fefad3681891f2aff20504b8134d854",
"text": "\"Yes, you can usually deposit/pay money into a credit card account in advance. They'll use it to pay any open debt; if there's money left over they'll carry it as a credit towards future changes. (\"\"Usually\"\" added in response to comments that some folks have been unable to do this -- though whether that was really policy or just limitation if web interface is unclear. Could be tested by simply sending them an overpayment as your next check and seeing whether they carry it as a credit or return the excess.)\"",
"title": ""
},
{
"docid": "6e6eb756cc10517e78138928fe576fa8",
"text": "\"Depositum irregulare is a Latin phrase that simply means \"\"irregular deposit.\"\" It's a concept from ancient Roman contract law that has a very narrow scope and doesn't actually apply to your example. There are two distinct parts to this concept, one dealing with the notion of a deposit and the other with the notion of irregularity. I'll address them both in turn since they're both relevant to the tax issue. I also think that this is an example of the XY problem, since your proposed solution (\"\"give my money to a friend for safekeeping\"\") isn't the right solution to your actual problem (\"\"how can I keep my money safe\"\"). The currency issue is a complication, but it doesn't change the fact that what you're proposing probably isn't a good solution. The key word in my definition of depositum irregulare is \"\"contract\"\". You don't mention a legally binding contract between you and your friend; an oral contract doesn't qualify because in the event of a breach, it's difficult to enforce the agreement. Legally, there isn't any proof of an oral agreement, and emotionally, taking your friend to court might cost you your friendship. I'm not a lawyer, but I would guess that the absence of a contract implies that even though in the eyes of you and your friend, you're giving him the money for \"\"safekeeping,\"\" in the eyes of the law, you're simply giving it to him. In the US, you would owe gift taxes on these funds if they're higher than a certain amount. In other words, this isn't really a deposit. It's not like a security deposit, in which the money may be held as collateral in exchange for a service, e.g. not trashing your apartment, or a financial deposit, where the money is held in a regulated financial institution like a bank. This isn't a solution to the problem of keeping your money safe because the lack of a contract means you incur additional risk in the form of legal risk that isn't present in the context of actual deposits. Also, if you don't have an account in the right currency, but your friend does, how are you planning for him to store the money anyway? If you convert your money into his currency, you take on exchange rate risk (unless you hedge, which is another complication). If you don't convert it and simply leave it in his safe, house, car boot, etc. you're still taking on risk because the funds aren't insured in the event of loss. Furthermore, the money isn't necessarily \"\"safe\"\" with your friend even if you ignore all the risks above. Without a written contract, you have little recourse if a) your friend decides to spend the money and not return it, b) your friend runs into financial trouble and creditors make claim to his assets, or c) you get into financial trouble and creditors make claims to your assets. The idea of giving money to another individual for safekeeping during bankruptcy has been tested in US courts and ruled invalid. If you do decide to go ahead with a contract and you do want your money back from your friend eventually, you're in essence loaning him money, and this is a different situation with its own complications. Look at this question and this question before loaning money to a friend. Although this does apply to your situation, it's mostly irrelevant because the \"\"irregular\"\" part of the concept of \"\"irregular deposit\"\" is a standard feature of currencies and other legal tender. 
It's part of the fungibility of modern currencies and doesn't have anything to do with taxes if you're only giving your friend physical currency. If you're giving him property, other assets, etc. for \"\"safekeeping\"\" it's a different issue entirely, but it's still probably going to be considered a gift or a loan. You're basically correct about what depositum irregulare means, but I think you're overestimating its reach in modern law. In Roman times, it simply refers to a contract in which two parties made an agreement for the depositor to deposit money or goods with the depositee and \"\"withdraw\"\" equivalent money or goods sometime in the future. Although this is a feature of the modern deposit banking system, it's one small part alongside contract law, deposit insurance, etc. These other parts add complexity, but they also add security and risk mitigation. Your arrangement with your friend is much simpler, but also much riskier. And yes, there probably are taxes on what you're proposing because you're basically giving or loaning the money to your friend. Even if you say it's not a loan or a gift, the law may still see it that way. The absence of a contract makes this especially important, because you don't have anything speaking in your favor in the event of a legal dispute besides \"\"what you meant the money to be.\"\" Furthermore, the money isn't necessarily safe with your friend, and the absence of a contract exacerbates this issue. If you want to keep your money safe, keep it in an account that's covered by deposit insurance. If you don't have an account in that currency, either a) talk to a lawyer who specializes in situation like this and work out a contract, or b) open an account with that currency. As I've stated, I'm not a lawyer, so none of the above should be interpreted as legal advice. That being said, I'll reiterate again that the concept of depositum irregulare is a concept from ancient Roman law. Trying to apply it within a modern legal system without a contract is a potential recipe for disaster. If you need a legal solution to this problem (not that you do; I think what you're looking for is a bank), talk to a lawyer who understands modern law, since ancient Roman law isn't applicable to and won't pass muster in a modern-day court.\"",
"title": ""
},
{
"docid": "b37c9c4fd5f5cccfc979693e5c5889fa",
"text": "\"This is a supplement to the additional answers. One way to generate \"\"passive\"\" income is by taking advantage of high interest checking / saving accounts. If you need to have a sum of liquid cash readily available at all times, you might as well earn the most interest you can while doing so. I'm not on any bank's payroll, so a Google search can yield a lot on this topic and help you decide what's in your best interest (pun intended). More amazingly, some banks will reward you straight in cash for simply using their accounts, barring some criteria. There's one promotion I've been taking advantage of which provides me $20/month flat, irrespective of my account balance. Again, I am not on anyone's payroll, but a Google search can be helpful here. I'd call these passive, as once you meet the promotion criteria, you don't need to do anything else but wait for your money. Of course, none of this will be enough to live off of, but any extra amount with minimal to zero time investment seems to be a good deal. (if people do want links for the claims I make, I will put these up. I just do not want to advertise directly for any banks or companies.)\"",
"title": ""
}
] |
fiqa
|
b541cf44bb9c7073f1f33f87fbcd67fd
|
Does high frequency trading (HFT) punish long-term investment?
|
[
{
"docid": "81672f347fadcd53ec6ff20a2ae9f470",
"text": "No, at least not noticeably so. The majority of what HFT does is to take advantage of the fact that there is a spread between buy and sell orders on the exchange, and to instantly fill both orders, gaining relatively risk-free profit from some inherent inefficiencies in how the market prices stocks. The end result is that intraday trading of the non-HFT nature, as well as speculative short-term trading will be less profitable, since HFT will cause the buy/sell spread to be closer than it would otherwise be. Buying and holding will be (largely) unaffected since the spread that HFT takes advantage of is miniscule compared to the gains a stock will experience over time. For example, when you go to buy shares intending to hold them for a long time, the HFT might cost you say, 1 to 2 cents per share. When you go to sell the share, HFT might cost you the same again. But, if you held it for a long time, the share might have doubled or tripled in value over the time you held it, so the overall effect of that 2-4 cents per share lost from HFT is negligible. However, since the HFT is doing this millions of times per day, that 1 cent (or more commonly a fraction of a cent) adds up to HFTs making millions. Individually it doesn't affect anyone that much, but collectively it represents a huge loss of value, and whether this is acceptable or not is still a subject of much debate!",
"title": ""
},
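The answer above reasons with rough numbers (a cent or two per share each way versus a share that doubles or triples over the holding period). Here is a minimal sketch of that comparison; the share price, position size, and per-share cost are hypothetical.

```python
# Minimal sketch: a cent or two of spread impact per share on each side is
# negligible next to long-term price appreciation, even though it adds up
# across the millions of trades an HFT desk makes.

buy_price = 20.00            # hypothetical purchase price per share
sell_price = 60.00           # hypothetical price years later (a tripling)
shares = 500
hft_cost_per_share = 0.02    # ~1-2 cents of spread impact, paid each way

round_trip_cost = 2 * hft_cost_per_share * shares   # on the buy and on the sell
gain = (sell_price - buy_price) * shares

print(f"HFT-related cost: ${round_trip_cost:.2f}")                    # $20.00
print(f"long-term gain:   ${gain:,.2f}")                              # $20,000.00
print(f"cost as a share of the gain: {round_trip_cost / gain:.4%}")   # 0.1000%
```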
{
"docid": "3bce49c9f14e16724303feccaa0b44cf",
"text": "I disagree strongly with the other two answers posted thus far. HFT are not just liquidity providers (in fact that claim is completely bogus, considering liquidity evaporates whenever the market is falling). HFT are not just scalping for pennies, they are also trading based on trends and news releases. So you end up having imperfect algorithms, not humans, deciding the price of almost every security being traded. These algorithms data mine for news releases or they look for and make correlations, even when none exist. The result is that every asset traded using HFT is mispriced. This happens in a variety of ways. Algos will react to the same news event if it has multiple sources (Ive seen stocks soar when week old news was re-released), algos will react to fake news posted on Twitter, and algos will correlate S&P to other indexes such as VIX or currencies. About 2 years ago the S&P was strongly correlated with EURJPY. In other words, the American stock market was completely dependent on the exchange rate of two currencies on completely different continents. In other words, no one knows the true value of stocks anymore because the free market hasnt existed in over 5 years.",
"title": ""
},
{
"docid": "941aef807b75234d032142cb464d03de",
"text": "\"Not really. High frequency traders affect mainly short term investors. If everyone invested long-term and traded infrequently, there would be no high frequency trading. For a long term investor, you by at X, hold for several years, and sell at Y. At worst, high frequency trading may affect \"\"X\"\" and \"\"Y\"\" by a few pennies (and the changes may cancel out). For a long term trader that doesn't amount to a \"\"hill of beans\"\" It is other frequent traders that will feel the loss of those \"\"pennies.\"\"\"",
"title": ""
}
] |
[
{
"docid": "1479b4eb0af174904498d34db9675862",
"text": "\"I don't think that HFT is a game-changer for retail investors. It does mean that amateur daytraders need to pack it up and go home, because the HFT guys are smarter, faster and have more money than you. I'm no Warren Buffet, but I've done better in the market over the last 4 years than I ever have, and I've been actively investing since 1995. You need to do your research and understand what you're investing in. Barring outliers like the \"\"Flash Crash\"\", nothing has changed. You have a great opportunity to buy quality companies with long track records of generous dividends right now for the \"\"safe\"\" part of your portfolio. You have great value stock opportunities. You have great opportunities to take risks on good companies the will benefit from economic recovery. What has changed is that the \"\"set it and forget it\"\" advice that people blindly followed from magazines doesn't work anymore. If you expect to park your money in Index funds and don't manage your money, you're going to lose. Remember that saying \"\"Buy low, sell high\"\"? You buy low when everyone is freaked out and you hear Gold commercials 24x7 on the radio.\"",
"title": ""
},
{
"docid": "27c4e69d2f392f68687ad026b2b9ae91",
"text": "The stock market's principal justification is matching investors with investment opportunities. That's only reasonably feasible with long-term investments. High frequency traders are not interested in investments, they are interested in buying cheap and selling expensive. Holding reasonably robust shares for longer binds their capital which is one reason the faster-paced business of dealing with options is popular instead. So their main manner of operation is leeching off actually occuring investments by letting the investors pay more than the recipients of the investments receive. By now, the majority of stock market business is indirect and tries guessing where the money goes rather than where the business goes. For one thing, this leads to the stock market's evaluations being largely inflated over the actual underlying committed deals happening. And as the commitment to an investment becomes rare, the market becomes more volatile and instable: it's money running in circles. Fast trading is about running in front of where the money goes, anticipating the market. But if there is no actual market to anticipate, only people running before the imagination of other people running before money, the net payout converges to zero as the ratio of serious actual investments in tangible targets declines. By and large, high frequency trading converges to a Ponzi scheme, and you try being among the winners of such a scheme. But there are a whole lot of people competing here, and essentially the net payoff is close to zero due to the large volumes in circulation as opposed to what ends up in actual tangible investments. It's a completely different game with different rules riding on the original idea of a stock market. So you have to figure out what your money should be doing according to your plans.",
"title": ""
},
{
"docid": "daff22609d39d7ef7c465090f1d9b402",
"text": "\"Are you talking long-term institutional or retail investors? Long-term *retail* investors look for *orderly markets*, the antithesis of HFT business models, which have a direct correlation between market volatility and profits. To a lesser extent, some \"\"dumb money\"\"/\"\"muppet\"\" institutionals do as well. HFT firms tout they supply liquidity into markets, when in fact the opposite is true. Yes, HFTs supply liquidity, *but only when the liquidity's benefits runs in their direction*. That is, they are applying the part of the liquidity definition that mentions \"\"high trading activity\"\", and conveniently ignoring the part that simultaneously requires \"\"*easily* buying *or* selling an asset\"\". If HFT's are the new exchange floor, then they need to be formalized as such, *and become bound to market maker responsibilities*. If they are actually supplying liquidity, like real Designated Market Makers in the NYSE for example, they become responsible to supply a specified liquidity for specified ticker symbols in exchange for their informational advantage on those tickers. The indisputable fact is that HFT cannot exist at their current profit levels without the information advantage they gain with preferential access to tick-by-tick data unavailable to investors who cannot afford the exchange fees ($1M per exchange 10 years ago, more now). Restrict the entire market, including HFTs, to only second-by-second price data without the tick-by-tick depth, and they won't do so well. Don't get me wrong, I'm not knocking HFTs *per se*; I think they are a marvelous development, so long as they really do \"\"supply liquidity\"\". Right now, they aren't doing so, and especially in an orderly manner. If you want retail investors to keep out of the water as they are doing now, by all means let HFT (and regulatory capture, and a whole host of other financial service industry ills) run as they are. There are arguments to be made about \"\"only let the professionals play the market\"\", where there is no role for retail and anyone who doesn't know how to play the long-term investment game needs to get out of the kitchen. But if you are making such an argument, come out and say so.\"",
"title": ""
},
{
"docid": "03bc7edadda951c2a1ee39f827de7419",
"text": "I dont think you understand teh main problem with HFT, they locate the machines next to the exchanges, and make money based on the latency between exchanges. This is not something anyone but them can do. And it is basically cheating the market. You ever see the scam.. it was in mash, where [Frank Burns](http://www.funtrivia.com/en/subtopics/M*A*S*H----Out-of-Sight-Out-of-Mind-181978.html) got the radio signals early for the games, and would place bets on the games already knowing the outcome for when they were broadcast on official armed forces radio. That is what HFC is. They get the score of the game before you do, and then bet on the game. and yes they place a lot of orders and cancel them, that isnt to slow other traders but to force more latency on the system, so they can know the scores even earlier.",
"title": ""
},
{
"docid": "a45c4bac1eea28b1ec31818d9dbc1df1",
"text": "HFT doesn't increase correlations nor do hedge funds, If you look at the euro crisis, correlations have skyrocketed, then in late december jan and feb during the rally the correlations started subsiding and specific risk started taking over, crisis mode increases correlations.",
"title": ""
},
{
"docid": "89ce8330c1188a7e46ca04b2cc8cf14a",
"text": "> It will have minimal effects on buy & hold traders since they typically research for a long time, then buy & hold stock for many months. This is the part I never understand. If a short term tax doesn't affect buy & hold traders, when why would HFT affect buy & hold traders?",
"title": ""
},
{
"docid": "02cf1973bc8bfdb5930a3f0b20037ecd",
"text": "By exploiting institutional investors, HFT does hurt small investors. People with pension, mutual, and index funds get smaller returns. Endowment funds are also going to get hurt which hurts hospitals, schools, charities, and other institutions that work for the public good. I agree with you though. At this point we would likely be just arguing semantics.",
"title": ""
},
{
"docid": "2497dced1eee532e6563c6de5196b408",
"text": "It's not necessarily the case that HFT acts as a tax on small traders. I haven't seen any studies demonstrating that HFT increases the average cost of shares; if anything small investors will be largely unaffected by HFT as it will be random noise to them, sometimes creating a slight increase, sometimes a slight decrease. The people most affected by HFT are institutional investors, whom HFT desks are pretty good at predicting the order pattern of and hence exploiting. They have no interest or capacity to exploit the small guys.",
"title": ""
},
{
"docid": "35d6242a9c18d05aa1c2988f791bd14e",
"text": "In some senses, any answer to this question is going to be opinion based - nobody outside of HFT firms really know what they do, as they tend to be highly secretive due to the competitive nature of the activity they're engaged in. What's more, people working at HFT firms are bound by confidentiality agreements, so even those in the industry have no idea how other firms operate. And finally, there tend to be very, very few people at each firm who have any kind of overall picture of how things work. The hardware and software that is used to implement HFT is 'modular', and a developer will work on a single component, having no idea how it fits into a bigger machine (a programmer, for instance, might right routines to perform a function for variable 'k', but have absolutely no idea for what 'k' stands!) Keeping this in mind and returning to the question . . . The one thing that is well known about HFT is that it is done at incredibly high speeds, making very small profits many thousands of times per day. Activities are typically associated with market making and 'scalping' which profits from or within the bid-ask spread. Where does all this leave us? At worst, the average investor might get clipped for a few cents per round trip in a stock. Given that investing buy its very nature involves long holding periods and (hopefully) large gains, the dangers associated with the activities of HFT are negligible for the average trader, and can be considered no more than a slight markup in execution costs. A whole other area not really touched upon in the answers above is the endemic instability that HFT can bring to entire financial markets. HFT is associated with the provision of liquidity, and yet this liquidity can vanish very suddenly at times of market stress as the HFT remove themselves from the market; the possibility of lack of liquidity is probably the biggest market-wide danger that may arise from HFT operations.",
"title": ""
},
{
"docid": "4efe3b27a94d2a15b4ac20365bdb87a6",
"text": "There is no reason that HFT in itself should be illegal. It provides a significant advantage to companies with access to those automated systems but then, we might as well eschew NASDAQ and go back to the manually traded days of NYSE or ban day trading in favor of long-term investing. What is problematic is that they place and immediately cancel large number of orders for two reasons - to test the market and to slow down the competition. A proposed ~~tax~~fee to charge for cancellation of non-executed trades seems like an interesting solution.",
"title": ""
},
{
"docid": "395e4a466026a14fb6261c61f25969b5",
"text": "\"A lot of people have already explained that your assumptions are the issue, but I'll throw in my 2¢. There are a lot of people who do the opposite of long term investing. It's called high frequency trading. I'd recommend reading the Wikipedia article for more info, but very basically, high frequency traders use programs to determine which stocks to buy and which ones to sell. An example program might be \"\"buy if the stock is increasing and sell if I've held it more than 1 second.\"\"\"",
"title": ""
},
{
"docid": "35b71b8af2271d48d916b263296b0d80",
"text": "\"You're not making any sort of a persuasive argument why predatory HFT should be allowed to levy a tax on the system. Yes, computer trading is heck of a lot better than paper trading. You're citing an article from 2010 that states things I'm not even arguing against as somehow supporting this \"\"tax\"\", the mechanisms of which the author was not even aware of at the time. There is no valid argument for why this tax is necessary or good for anyone but the firms profiting from it. I'm all for lower spreads and near instant order processing, but that doesn't excuse some greedy prick from scalping my 401k a point or two of compounding interest every year.\"",
"title": ""
},
{
"docid": "58d36651cc5f1d4b3e8327bc4833378a",
"text": "\"If you're investing for the long term your best strategy is going to be a buy-and-hold strategy, or even just buying a few index funds in several major asset classes and forgetting about it. Following \"\"market conditions\"\" is about as useful to the long term trader as checking the weather in Anchorage, Alaska every day (assuming that you don't live in Anchorage, Alaska). Let me suggest treating yourself to a subscription to The Economist and read it once a week. You'll learn a lot more about investing, economics, and world trends, and you won't be completely in the dark if there are major structural changes in the world (like gigantic housing bubbles) that you might want to know about.\"",
"title": ""
},
{
"docid": "cdfc4ac08efcdf6c897b314dd526af49",
"text": "Not really. This just shows you might be able to fill huge orders a little less cheaply now, but then somebody else gets to fill it at a better price and the HFT/Market Maker took on some risk for some profit. That's pretty much the definition of providing liquidity. How is this raising prices? It's not. Raising prices would be if HFT bought like 2mil worth of stock and just held it for some time, then tried to sell it to you later at a better price, but that's just investing.",
"title": ""
},
{
"docid": "b89990eeba193697f81dbf2659aaadf4",
"text": "\"First it is worth noting the two sided nature of the contracts (long one currency/short a second) make leverage in currencies over a diverse set of clients generally less of a problem. In equities, since most margin investors are long \"\"equities\"\" making it more likely that large margin calls will all be made at the same time. Also, it's worth noting that high-frequency traders often highly levered make up a large portion of all volume in all liquid markets ~70% in equity markets for instance. Would you call that grossly artificial? What is that volume number really telling us anyway in that case? The major players holding long-term positions in the FX markets are large banks (non-investment arm), central banks and corporations and unlike equity markets which can nearly slow to a trickle currency markets need to keep trading just for many of those corporations/banks to do business. This kind of depth allows these brokers to even consider offering 400-to-1 leverage. I'm not suggesting that it is a good idea for these brokers, but the liquidity in currency markets is much deeper than their costumers.\"",
"title": ""
}
] |
fiqa
|
03939b4ac597234df33d749f18b3fba2
|
Should you diversify your bond investments across many foreign countries?
|
[
{
"docid": "0614273d91d85965c4ba9eaaef0c1251",
"text": "Adding international bonds to an individual investor's portfolio is a controversial subject. On top of the standard risks of bonds you are adding country specific risk, currency risk and diversifying your individual company risk. In theory many of these risks should be rewarded but the data are noisy at best and adding risk like developed currency risk may not be rewarded at all. Also, most of the risk and diversification mentioned above are already added by international stocks. Depending on your home country adding international or emerging market stock etfs only add a few extra bps of fees while international bond etfs can add 30-100bps of fees over their domestic versions. This is a fairly high bar for adding this type of diversification. US bonds for foreign investors are a possible exception to the high fees though the government's bonds yield little. If your home currency (or currency union) does not have a deep bond market and/or bonds make up most of your portfolio it is probably worth diversifying a chunk of your bond exposure internationally. Otherwise, you can get most of the diversification much more cheaply by just using international stocks.",
"title": ""
},
{
"docid": "cc493cfe1797cefdcc73b62863b7e062",
"text": "The Vanguard Emerging Market Bond Index has a SEC yield of 4.62%, an expense ratio of 0.34%, a purchase fee of 0.75%, and an average duration of 6.7 years. The Vanguard Emerging Market Bond Index only invests in US Dollar denominated securities, so it is not exposed to currency risk. The US Intermediate Term Bond Index Fund has a SEC yield of 2.59%, an expense ratio of 0.1% and an average duration of 6.5 years. So after expenses, the emerging market bond fund gives you 1.04% of extra yield (more in subsequent years as the purchase fee is only paid once). Here are the results of a study by Vanguard: Based on our findings, we believe that most investors should consider adding [currency risked] hedged foreign bonds to their existing diversified portfolios. I think a globally diversified bond portfolio results in a portfolio that's more diversified.",
"title": ""
}
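The 1.04% figure quoted above follows from simple arithmetic on the funds' published yields and fees. This sketch reproduces that calculation, treating the 0.75% purchase fee as a first-year-only cost, as the passage does.

```python
# Minimal sketch reproducing the arithmetic in the passage: yields net of
# expenses for the two funds, with the one-time 0.75% purchase fee counted
# only against the first year.

em_sec_yield, em_expense, em_purchase_fee = 0.0462, 0.0034, 0.0075
us_sec_yield, us_expense = 0.0259, 0.0010

em_net_first_year = em_sec_yield - em_expense - em_purchase_fee   # 3.53%
em_net_later_years = em_sec_yield - em_expense                    # 4.28%
us_net = us_sec_yield - us_expense                                # 2.49%

print(f"extra yield, year 1:      {em_net_first_year - us_net:.2%}")    # ~1.04%
print(f"extra yield, later years: {em_net_later_years - us_net:.2%}")   # ~1.79%
```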
] |
[
{
"docid": "296b7a2e96d632ad86e69f69b97d10fe",
"text": "It sounds like you are soliciting opinions a little here, so I'll go ahead and give you mine, recognizing that there's a degree of arbitrariness here. A basic portfolio consists of a few mutual funds that try to span the space of investments. My choices given your pot: I like VLTCX because regular bond index funds have way too much weight in government securities. Government bonds earn way too little. The CAPM would suggest a lot more weight in bonds and international equity. I won't put too much in bonds because...I just don't feel like it. My international allocation is artificially low because it's always a little more costly and I'm not sure how good the diversification gains are. If you are relatively risk averse, you can hold some of your money in a high-interest online bank and only put a portion in these investments. $100K isn't all that much money but the above portfolio is, I think, sufficient for most people. If I had a lot more I'd buy some REIT exposure, developing market equity, and maybe small cap. If I had a ton more (several million) I'd switch to holding individual equities instead of funds and maybe start looking at alternative investments, real estate, startups, etc.",
"title": ""
},
{
"docid": "38a479e3fac8a4d4deb5d8caa993d72a",
"text": "\"Having savings only in your home currency is relatively 'low risk' compared with other types of 'low diversification'. This is because, in a simple case, your future cash outflows will be in your home currency, so if the GBP fluctuates in value, it will (theoretically) still buy you the same goods at home. In this way, keeping your savings in the same currency as your future expenditures creates a natural hedge against currency fluctuation. This gets complicated for goods imported from other countries, where base price fluctuates based on a foreign currency, or for situations where you expect to incur significant foreign currency expenditures (retirement elsewhere, etc.). In such cases, you no longer have certainty that your future expenditures will be based on the GBP, and saving money in other currencies may make more sense. In many circumstances, 'diversification' of the currency of your savings may actually increase your risk, not decrease it. Be sure you are doing this for a specific reason, with a specific strategy, and not just to generally 'spread your money around'. Even in case of a Brexit, consider: what would you do with a bank account full of USD? If the answer is \"\"Convert it back to GBP when needed (in 6 months, 5 years, 30, etc.), to buy British goods\"\", then I wouldn't call this a way to reduce your risk. Instead, I would call it a type of investment, with its own set of risks associated.\"",
"title": ""
},
{
"docid": "9ba531704a6a6569d654bfcf27ce3fb7",
"text": "\"Morningstar is often considered a trusted industry standard when it comes to rating mutual funds and ETFs. They offer the same data-centric information for other investments as well, such as individual stocks and bonds. You can consult Morningstar directly if you like, but any established broker will usually provide you with Morningstar's ratings for the products it is trying to sell to you. Vanguard offers a few Emerging Markets stock and bond funds, some actively managed, some index funds. Other investment management companies (Fidelity, Schwab, etc.) presumably do as well. You could start by looking in Morningstar (or on the individual companies' websites) to find what the similarities and differences are among these funds. That can help answer some important questions: I personally just shove a certain percentage of my portfolio into non-US stocks and bonds, and of that allocation a certain fraction goes into \"\"established\"\" economies and a certain fraction into \"\"emerging\"\" ones. I do all this with just a few basic index funds, because the indices make sense (to me) and index funds cost very little.\"",
"title": ""
},
{
"docid": "59f0fb24483bf24e45448509eb2c3850",
"text": "\"Even though \"\"when the U.S. sneezes Canada catches a cold\"\", I would suggest considering a look at Canadian government bonds as both a currency hedge, and for the safety of principal — of course, in terms of CAD, not USD. We like to boast that Canada fared relatively better (PDF) during the economic crisis than many other advanced economies, and our government debt is often rated higher than U.S. government debt. That being said, as a Canadian, I am biased. For what it's worth, here's the more general strategy: Recognize that you will be accepting some currency risk (in addition to the sovereign risks) in such an approach. Consistent with your ETF approach, there do exist a class of \"\"international treasury bond\"\" ETFs, holding short-term foreign government bonds, but their holdings won't necessarily match the criteria I laid out – although they'll have wider diversification than if you invested in specific countries separately.\"",
"title": ""
},
{
"docid": "c5e0d911af62091f18a6573283d3b230",
"text": "would you say it's advisable to keep some of cash savings in a foreign currency? This is primarily opinion based. Given that we live in a world rife with geopolitical risks such as Brexit and potential EU breakup There is no way to predict what will happen in such large events. For example if one keeps funds outside on UK in say Germany in Euro's. The UK may bring in a regulation and clamp down all funds held outside of UK as belonging to Government or tax these at 90% or anything absurd that negates the purpose of keeping funds outside. There are example of developing / under developed economics putting absurd capital controls. Whether UK will do or not is a speculation. If you are going to spend your live in a country, it is best to invest in country. As normal diversification, you can look at keep a small amount invested outside of country.",
"title": ""
},
{
"docid": "eddf10b9b6dae95cbbd0441684ab2b0a",
"text": "Diversification is an important aspect of precious metals investing. Therefore I would suggest diversifying in a number of different ways:",
"title": ""
},
{
"docid": "817db6a727dc0ed4825fbb46bf03671e",
"text": "In a word, no. Diversification is the first rule of investing. Your plan has poor diversification because it ignores most of the economy (large cap stocks). This means for the expected return your portfolio would get, you would bear an unnecessarily large amount of risk. Large cap and small cap stocks take turns outperforming each other. If you hold both, you have a safer portfolio because one will perform well while the other performs poorly. You will also likely want some exposure to the bond market. A simple and diversified portfolio would be a total market index fund and a total bond market fund. Something like 60% in the equity and 40% in the bonds would be reasonable. You may also want international exposure and maybe exposure to real estate via a REIT fund. You have expressed some risk-aversion in your post. The way to handle that is to take some of your money and keep it in your cash account and the rest into the diversified portfolio. Remember, when people add more and more asset classes (large cap, international, bonds, etc.) they are not increasing the risk of their portfolio, they are reducing it via diversification. The way to reduce it even more (after you have diversified) is to keep a larger proportion of it in a savings account or other guaranteed investment. BTW, your P2P lender investment seems like a great idea to me, but 60% of your money in it sounds like a lot.",
"title": ""
},
{
"docid": "a487098eb5d373fc761b2f723dfdff16",
"text": "The problem is aggregating information from so many sources, countries, and economies. You are probably more aware of local laws, local tax changes, local economic performance, etc, so it makes sense that you'd be more in tune with your own country. If your intent is to be fully diversified, then buy a total world fund. A lot of hedge funds do what you are suggesting, but I think it requires either some serious math or some serious research. Note: I'm invested in emerging markets (EEM) for exactly the reason you suggest... diversification.",
"title": ""
},
{
"docid": "e374af4ed349a2931e35b34bac47367d",
"text": "It depends on how much diversification you think you need and what your mutual fund options are. For instance, picking an index fund already provides a fair amount of diversification, especially if you select a Total Market type of index (readily available from Fidelity and Vanguard, and many other fund families). Are you looking to balance domestic vs. international investments? You may want to add an international index fund to the mix. Feel that a particular sector has tremendous potential? Add a sector fund. This investment mix is up to you (or your investment advisor). However, depending on your Roth IRA mutual fund choices, some of these funds may have minimum investment requirement - $3k to open a fund's account, for instance. In that case, you'd have no choice but to put your entire investment into one fund, and wait for subsequent years where you'd be able to invest in other funds after providing additional contributions and/or reallocation any growth from your initial investment. One thing to look at is whether you have an option of putting some of your contributions into a money market account within the Roth IRA - you can then reallocate funds from that account into another fund after you can meet the minimum investment requirement. However, in my opinion, if you start out by investing in a solid, low-cost index fund from a reputable mutual fund company, you've already picked up most of the diversification you need - a single fund is enough.",
"title": ""
},
{
"docid": "6ee5094a258ae0377d39f8cdcfb21087",
"text": "\"Tricky question, basically, you just want to first spread risk around, and then seek abnormal returns after you understand what portions of your portfolio are influenced by (and understand your own investment goals) For a relevant timely example: the German stock exchange and it's equity prices are reaching all time highs, while the Greek asset prices are reaching all time lows. If you just invested in \"\"Europe\"\" your portfolio will experience only the mean, while suffering from exchange rate changes. You will likely lose because you arbitrarily invested internationally, for the sake of being international, instead of targeting a key country or sector. Just boils down to more research for you, if you want to be a passive investor you will get passive investor returns. I'm not personally familiar with funds that are good at taking care of this part for you, in the international markets.\"",
"title": ""
},
{
"docid": "787e561450535d93b98cac7b6f0088e2",
"text": "This is Ellie Lan, investment analyst at Betterment. To answer your question, American investors are drawn to use the S&P 500 (SPY) as a benchmark to measure the performance of Betterment portfolios, particularly because it’s familiar and it’s the index always reported in the news. However, going all in to invest in SPY is not a good investment strategy—and even using it to compare your own diversified investments is misleading. We outline some of the pitfalls of this approach in this article: Why the S&P 500 Is a Bad Benchmark. An “algo-advisor” service like Betterment is a preferable approach and provides a number of advantages over simply investing in ETFs (SPY or others like VOO or IVV) that track the S&P 500. So, why invest with Betterment rather than in the S&P 500? Let’s first look at the issue of diversification. SPY only exposes investors to stocks in the U.S. large cap market. This may feel acceptable because of home bias, which is the tendency to invest disproportionately in domestic equities relative to foreign equities, regardless of their home country. However, investing in one geography and one asset class is riskier than global diversification because inflation risk, exchange-rate risk, and interest-rate risk will likely affect all U.S. stocks to a similar degree in the event of a U.S. downturn. In contrast, a well-diversified portfolio invests in a balance between bonds and stocks, and the ratio of bonds to stocks is dependent upon the investment horizon as well as the individual's goals. By constructing a portfolio from stock and bond ETFs across the world, Betterment reduces your portfolio’s sensitivity to swings. And the diversification goes beyond mere asset class and geography. For example, Betterment’s basket of bond ETFs have varying durations (e.g., short-term Treasuries have an effective duration of less than six months vs. U.S. corporate bonds, which have an effective duration of just more than 8 years) and credit quality. The level of diversification further helps you manage risk. Dan Egan, Betterment’s Director of Behavioral Finance and Investing, examined the increase in returns by moving from a U.S.-only portfolio to a globally diversified portfolio. On a risk-adjusted basis, the Betterment portfolio has historically outperformed a simple DIY investor portfolio by as much as 1.8% per year, attributed solely to diversification. Now, let’s assume that the investor at hand (Investor A) is a sophisticated investor who understands the importance of diversification. Additionally, let’s assume that he understands the optimal allocation for his age, risk appetite, and investment horizon. Investor A will still benefit from investing with Betterment. Automating his portfolio management with Betterment helps to insulate Investor A from the ’behavior gap,’ or the tendency for investors to sacrifice returns due to bad timing. Studies show that individual investors lose, on average, anywhere between 1.2% to 4.3% due to the behavior gap, and this gap can be as high as 6.5% for the most active investors. Compared to the average investor, Betterment customers have a behavior gap that is 1.25% lower. How? Betterment has implemented smart design to discourage market timing and short-sighted decision making. For example, Betterment’s Tax Impact Preview feature allows users to view the tax hit of a withdrawal or allocation change before a decision is made. Currently, Betterment is the only automated investment service to offer this capability. 
This function allows you to see a detailed estimate of the expected gains or losses broken down by short- and long-term, making it possible for investors to make better decisions about whether short-term gains should be deferred to the long-term. Now, for the sake of comparison, let’s assume that we have an even more sophisticated investor (Investor B), who understands the pitfalls of the behavior gap and is somehow able to avoid it. Betterment is still a better tool for Investor B because it offers a suite of tax-efficient features, including tax loss harvesting, smarter cost-basis accounting, municipal bonds, smart dividend reinvesting, and more. Each of these strategies can be automatically deployed inside the portfolio—Investor B need not do a thing. Each of these strategies can boost returns by lowering tax exposure. To return to your initial question—why not simply invest in the S&P 500? Investing is a long-term proposition, particularly when saving for retirement or other goals with a time horizon of several decades. To be a successful long-term investor means employing the core principles of diversification, tax management, and behavior management. While the S&P might look like a ‘hot’ investment one year, there are always reversals of fortune. The goal with long-term passive investing—the kind of investing that Betterment offers—is to help you reach your investing goals as efficiently as possible. Lastly, Betterment offers best-in-industry advice about where to save and how much to save for no fee.",
"title": ""
},
{
"docid": "95738b7725dea352d912355a70fde454",
"text": "Diversification is a risk-mitigation strategy. When you invest in equities, you generally get a higher rate of return than a fixed income investment. But you have risks... a single company's market value can decline for all sorts of reasons, including factors outside of the control of management. Diversification lets you spread risk and concentrate on sectors that you feel offer the best value. Investing outside of your currency zone allows you to diversify more, but also introduces currency risks, which require a whole other level of understanding. Today, investing in emerging markets is very popular for US investors because these economies are booming and US monetary policy has been weakening the dollar for some time. A major bank failure in China or a flip to a strong dollar policy could literally implode those investments overnight. At the end of the day, invest in what you understand. Know the factors that can lower your investment value.",
"title": ""
},
{
"docid": "4020148b59bb0379647b59069ba0455c",
"text": "\"This paper by a Columbia business school professor says: The standard 60%/40% strategy outperforms a 100% bond or 100% stock strategy over the 1926-1940 period (Figure 5) and over the 1990-2011 period (Figure 6). This is based on actual market data from those periods. You can see the figures in the PDF. These are periods of 14 and 21 years, which is perhaps shorter than the amount of time money would sit in your IRA, but still a fairly long time. The author goes on with a lot of additional discussion and claims that \"\"under certain conditions, rebalancing will always outperform a buy-and-hold portfolio given sufficient time\"\". Of course, there are also many periods over which a given asset mix would underperform, so there are no guarantees here. I read your question as asking \"\"is there any data suggesting that rebalancing a diversified portfolio can outperform an all-in-one-asset-class portfolio\"\". There is some such data. However, if you're asking which investing strategy you should actually choose, you'd want to look at a lot of data on both sides. You're unlikely to find data that \"\"proves\"\" anything conclusively either way. It should also be noted that the rebalancing advantage described here (and in your question) is not specific to bonds. For instance, in theory, rebalancing between US and international stocks could show a similar advantage over an all-US or all-non-US portfolio. The paper contains a lot of additional discussion about rebalancing. It seems that your question is really about whether rebalancing a diverse portfolio is better than going all-in with one asset class, and this question is touched on throughout the paper. The author mentions that diversification and rebalancing strategies should be chosen not solely for their effect on mathematically-calculated returns, but for their match with your psychological makeup and tolerance for risk.\"",
"title": ""
},
{
"docid": "001308bb6898cc328653575ba51889b7",
"text": "Not to my knowledge. Often the specific location is diversified out of the fund because each major building company or real estate company attempts to diversify risk by spreading it over multiple geographical locations. Also, buyers of these smaller portfolios will again diversify by creating a larger fund to sell to the general public. That being said, you can sometimes drill down to the specific assets held by a real estate fund. That takes a lot of work: You can also look for the issuer of the bond that the construction or real estate company issued to find out if it is region specific. Hope that helps.",
"title": ""
},
{
"docid": "0070d47865283906801bfe5184170931",
"text": "How is this not a breach of contract? Is it just because the corporate entity you're doing business with has gone out of business? Isn't the parent company still responsible? It seems to me that if it's not, there are a lot of ways that an unethical person / company could game the system.",
"title": ""
}
] |
fiqa
|
5dfff5fac397672186a3876f78e6b42d
|
Virtual currency investment
|
[
{
"docid": "685f5af46c704157e62049b3b1eace69",
"text": "I don't know much about paypal or bitcoin, but I can provide a little information on BTC(Paypal I thought was just a service for moving real currency). BTC has an exchange, in which the price of a bitcoin goes up and down. You can invest in to it much like you would invest in the stock market. You can also invest in equipment to mine bitcoins, if you feel like that is worthwhile. It takes quite a bit of research and quite a bit of knowledge. If you are looking to provide loans with interest, I would look into P2P lending. Depending on where you live, you can buy portions of loans, and receive monthly payments with the similiar risk that credit card companies take on(Unsecured debt that can be cleared in bankruptcy). I've thrown a small investment into P2P lending and it has had average returns, although I don't feel like my investment strategy was optimal(took on too many high risk notes, a large portion of which defaulted). I've been doing it for about 8 months, and I've seen an APY of roughly 9%, which again I think is sub-optimal. I think with better investment strategy you could see closer to 12-15%, which could swing heavily with economic downturn. It's hard to say.",
"title": ""
}
] |
[
{
"docid": "ff2a2e28dcef5b4943d13de9f71cf09e",
"text": "\"This is the best tl;dr I could make, [original](https://news.bitcoin.com/britain-largest-broker-exchange-traded-bitcoin-investments/) reduced by 86%. (I'm a bot) ***** > On Thursday, June 1, two bitcoin investments were added to Hargreaves Lansdown's platform; Bitcoin Tracker One and Bitcoin Tracker Eur. > The foreign exchange rate risk for Bitcoin Tracker One is USD/SEK whereas it is USD/EUR for Bitcoin Tracker Eur. > While the certificates are denominated in SEK and EUR, they track the price of bitcoin in USD. "As the BTC/USD market is the most liquid bitcoin market widely available for trading, we regard it as the most suitable underlying asset in a bitcoin product," the company explained. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/6ewbj3/britains_largest_broker_offers_exchangetraded/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ \"\"Version 1.65, ~135054 tl;drs so far.\"\") | [Theory](http://np.reddit.com/r/autotldr/comments/31bfht/theory_autotldr_concept/) | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr \"\"PM's and comments are monitored, constructive feedback is welcome.\"\") | *Top* *keywords*: **bitcoin**^#1 **accounts**^#2 **Hargreaves**^#3 **Lansdown**^#4 **track**^#5\"",
"title": ""
},
{
"docid": "500aba91d79281094dbadba775df5b7a",
"text": "I'm using iBank on my Mac here and that definitely supports different currencies and is also supposed to be able to track investments (I haven't used it to track investments yet, hence the 'supposed to' caveat).",
"title": ""
},
{
"docid": "45c3cb28491d6b35f3219f442d3100a6",
"text": "\"These have the potential to become \"\"end-of-the-world\"\" scenarios, so I'll keep this very clear. If you start to feel that any particular investment may suddenly become worthless then it is wise to liquidate that asset and transfer your wealth somewhere else. If your wealth happens to be invested in cash then transferring that wealth into something else is still valid. Digging a hole in the ground isn't useful and running for the border probably won't be necessary. Consider countries that have suffered actual currency collapse and debt default. Take Zimbabwe, for example. Even as inflation went into the millions of percent, the Zimbabwe stock exchange soared as investors were prepared to spend ever-more of their devaluing currency to buy stable stocks in a small number of locally listed companies. Even if the Euro were to suffer a critical fall, European companies would probably be ok. If you didn't panic and dig caches in the back garden over the fall of dotcom, there is no need to panic over the decline of certain currencies. Just diversify your risk and buy non-cash (or euro) assets. Update: A few ideas re diversification: The problem for Greece isn't really a euro problem; it is local. Local property, local companies ... these can be affected by default because no-one believes in the entirety of the Greek economy, not just the currency it happens to be using - so diversification really means buying things that are outside Greece.\"",
"title": ""
},
{
"docid": "c760adde250dd20b09e0e032b5bdd9d6",
"text": "When you buy a currency via FX market, really you are just exchanging one country's currency for another. So if it is permitted to hold one currency electronically, surely it must be permitted to hold a different country's currency electronically.",
"title": ""
},
{
"docid": "bc1a531e3572ae0bf4d32e289fb386cd",
"text": "Look at the tech used. Bitcoin a a very small numbers of others have their own blockchains. Nearly all other coins are build on Ethereum. Ethereum is like iOS for blockchain. One can build public or private versions. The fuel / transaction currency of the Ethereum playtform (think distributed operating system) is Ether (ETH) The Ethereum guys were very clever and got the biggest guys deploy involved multi nationals.. all of them have internal ethereum projects to capture their internal value distribution systems. But they ar already planning to manage their entire supply chains with this. Think: Com apt gets order via a distributed order system . These orders are legally binding. The bank of the producer Dan now issue a credit that flows through 10+ levels to suppliers and they will no longer be held up by week if not Knoth long PurchaseOrder processes. => these types of things is what the EEA is all about. At the same time Ethereum has much different transaction limits and has clear paths for increasing those limits in the future. Try transfer bitcoin right now :) The might be a lot of other coins coming but the question you should ask yourself: do you think there will there be a competitor to ethereum as general purpose blockchain platform in any near / mid term future. Keep in mind the already unreadably broad industry support. Depending on the answer to this ether is a great investment or just a good one. Either way :) Not saying other coins won't yield higher results but if you invest in the ether you invest in the currency the transactions of the others us calculated at. So in any case if you believe in coins there will be uptake on Ether.",
"title": ""
},
{
"docid": "0ee003abb9d3d266789513d9d7673856",
"text": "\"Edit: I discovered Bitcoin a few months after I posted this answer. I would strongly recommend anyone interested in this question to review it, particularly the myths page that dispels much of the FUD. Original answer: Although it is not online, as a concept the Totnes Pound may be of interest to you. I live quite close to this village (in the UK) and the system it promotes does work well. According to the Transition Town Totnes website this means that it is \"\"a community in a process of imagining and creating a future that addresses the twin challenges of diminishing oil and gas supplies and climate change , and creates the kind of community that we would all want to be part of.\"\" If you are looking for a starting place to introduce a new type of currency, perhaps in response to over-dependence on oil and global trade, then reading about the Transition Towns initiative could provide you with the answers you're looking for.\"",
"title": ""
},
{
"docid": "f23a77c2c5432db5c7cf786f6e890560",
"text": "I find this site to be really poor for the virtual play portion, especially the options league. After you place a trade, you can't tell what you actually traded. The columns for Exp and type are blank. I have had better luck with OptionsXpress virtual trader. Although they have recently changed their criteria for a non funded accounts and will only keep them active for 90 days. I know the cboe has a paper trading platform but I haven't tried it out yet.",
"title": ""
},
{
"docid": "a6a908e79622930b75bd84c3ed3768c8",
"text": "Peer to peer lending such as Kiva, Lending Club, Funding Circle(small business), SoFi(student loans), Prosper, and various other services provide you with access to the 'basic form' of investing you described in your question. Other funds: You may find the documentary '97% Owned' fascinating as it provides an overview of the monetary system of England, with parallels to US, showing only 3% of money supply is used in exchange of goods and services, 97% is engaged in some form of speculation. If speculative activities are of concern, you may need to denounce many forms of currency. Lastly, be careful of taking the term addiction too lightly and deeming something unethical too quickly. You may be surprised to learn there are many people like yourself working at 'unethical' companies changing them within.",
"title": ""
},
{
"docid": "e4bfed0d60b7aad95ded1939b5bb5c18",
"text": "I like precious metals and real estate. For the OP's stated timeframe and the effects QE is having on precious metals, physical silver is not a recommended short term play. If you believe that silver prices will fall as QE is reduced, you may want to consider an ETF that shorts silver. As for real estate, there are a number of ways to generate profit within your time frame. These include: Purchase a rental property. If you can find something in the $120,000 range you can take a 20% mortgage, then refinance in 3 - 7 years and pull out the equity. If you truly do not need the cash to purchase your dream home, look for a rental property that pays all the bills plus a little bit for you and arrange a mortgage of 80%. Let your money earn money. When you are ready you can either keep the property as-is and let it generate income for you, or sell and put more than $100,000 into your dream home. Visit your local mortgage broker and ask if he does third-party or private lending. Ask about the process and if you feel comfortable with him, let him know you'd like to be a lender. He will then find deals and present them to you. You decide if you want to participate or not. Private lenders are sometimes used for bridge financing and the loan amortizations can be short (6 months - 5 years) and the rates can be significantly higher than regular bank mortgages. The caveat is that as a second-position mortgage, if the borrower goes bankrupt, you're not likely to get your principal back.",
"title": ""
},
{
"docid": "fd9a98455fed7756d4b3f2fb56ea0aca",
"text": "How long is a piece of string? This will depend on many variables. How many trades will you make in a day? What income would you be expecting to make? What expectancy do you need to achieve? Which markets you will choose to trade? Your first step should be to develop a Trading Plan, then develop your trading rules and your risk management. Then you should back test your strategy and then use a virtual account to practice losing on. Because one thing you will get is many losses. You have to learn to take a loss when the market moves against you. And you need to let your profits run and keep your losses small. A good book to start with is Trade Your Way to Financial Freedom by Van Tharp. It will teach you about Expectancy, Money Management, Risk Management and the Phycology of Trading. Two thing I can recommend are: 1) to look into position and trend trading and other types of short term trading instead of day trading. You would usually place your trades after market close together with your stops and avoid being in front of the screen all day trying to chase the market. You need to take your emotion out of your trading if you want to succeed; 2) don't trade penny stocks, trade commodities, FX or standard stocks, but keep away from penny stocks. Just because you can buy them for a penny does not mean they are cheap.",
"title": ""
},
{
"docid": "f595b075ccb746ad463a41920df329a2",
"text": "\"In 2014 the IRS announced that it published guidance in Notice 2014-21. In that notice, the answer to the first question describes the general tax treatment of virtual currency: For federal tax purposes, virtual currency is treated as property. General tax principles applicable to property transactions apply to transactions using virtual currency. As it's property like any other, capital gains if and when you sell are taxed. But there's nothing illegal or nefarious about it, and while you might get some odd questions if a large deposit ends up in your bank account, as long as you answer them there really isn't a problem. If you don't have documentation of how much you paid for it, if it's a trivial amount compared to what it's worth now you can just declare $0 as your basis. I would suggest you try to have documentation that you've held it at least one year so that it's a long-term capital gain, but you can just mark the purchase date as \"\"Various\"\" on your tax form. I've done this (for a much smaller amount of bitcoins, alas) and haven't run into any trouble. While there are some good reasons to sell slowly, as others are saying, I want to play devil's advocate for a minute and give you a reason to sell quickly: A decision to hold is equivalent to a decision to buy. That is, if a million dollars randomly ended up in your bank account for no reason, you probably wouldn't choose to go put it all into bitcoin, and then slowly sell it. Yet that's more-or-less an equivalent financial situation to holding on to the bitcoin and slowly selling it. While there are certainly tax advantages to selling over the course of many years, bitcoin is one of the most volatile commodities out there, and one has no idea what will happen over the next few weeks, let alone the next few years. It may go to tens of thousands of dollars a coin, or it may go to basically zero. If I had a million dollars in my pocket, bitcoin isn't how I'd choose to store it all. Just something to think about; obviously you need to make the best choice for you for yourself.\"",
"title": ""
},
{
"docid": "6057489b63d4a6078034e2f58b3fe5f7",
"text": "I'm not sure, but I think the monetary system of Second Life or World of Warcraft would correspond to what you are looking for. I don't think they are independent of the dollar though, since acquiring liquidity in those games can be done through exchange for real dollars. But there can be more closed systems, maybe Sim type games where this is not the case. I hope this helps.",
"title": ""
},
{
"docid": "256281edbae94c0904bd9b4f76f8fe41",
"text": "\"This is the best tl;dr I could make, [original](https://mobile.nytimes.com/2017/08/03/style/what-is-cryptocurrency.html?referer=) reduced by 95%. (I'm a bot) ***** > Most readers have probably heard of Bitcoin, the digital coin that dominates the cryptocurrency market. > As traditional paths to upper-middle-class stability are being blocked by debt, exorbitant housing costs and a shaky job market, these investors view cryptocurrency not only as a hedge against another Dow Jones crash, but also as the most rational - and even utopian - means of investing their money. > Assuming one's money is protected, there are, of course, the standard risks of investing, amplified by the volatility of cryptocurrency. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/6sdv1b/ethereum_in_nytimes/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ \"\"Version 1.65, ~186120 tl;drs so far.\"\") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr \"\"PM's and comments are monitored, constructive feedback is welcome.\"\") | *Top* *keywords*: **cryptocurrency**^#1 **market**^#2 **money**^#3 **coin**^#4 **invest**^#5\"",
"title": ""
},
{
"docid": "44c1a694da5c07c973e7e50b0180cf2c",
"text": "According to your post, you bought seven shares of VBR at $119.28 each on August 23rd. You paid €711,35. Now, on August 25th, VBR is worth $120.83. So you have But you want to know what you have in EUR, not USD. So if I ask Google how much $845.81 is in EUR, it says €708,89. That's even lower than what you're seeing. It looks like USD has fallen in value relative to EUR. So while the stock price has increased in dollar terms, it has fallen in euro terms. As a result, the value that you would get in euros if you sold the stock has fallen from the price that you paid. Another way of thinking about this is that your price per share was €101,72 and is now €101,33. That's actually a small drop. When you buy and sell in a different currency that you don't actually want, you add the currency risk to your normal risk. Maybe that's what you want to do. Or maybe you would be better off sticking to euro-denominated investments. Usually you'd do dollar-denominated investments if some of your spending was in dollars. Then if the dollar goes up relative to the euro, your investment goes up with it. So you can cash out and make your purchases in dollars without adding extra money. If you make all your purchases in euros, I would normally recommend that you stick to euro-denominated investments. The underlying asset might be in the US, but your fund could still be in Europe and list in euros. That's not to say that you can't buy dollar-denominated investments with euros. Clearly you can. It's just that it adds currency risk to the other risks of the investment. Unless you deliberately want to bet that USD will rise relative to EUR, you might not want to do that. Note that USD may rise over the weekend and put you back in the black. For that matter, even if USD continues to fall relative to the EUR, the security might rise more than that. I have no opinion on the value of VBR. I don't actually know what that is, as it doesn't matter for the points I was making. I'm not saying to sell it immediately. I'm saying that you might prefer euro-denominated investments when you buy in the future. Again, unless you are taking this particular risk deliberately.",
"title": ""
},
{
"docid": "2e963a985a9bfcb61d6590bd0e46d14d",
"text": "Try something like this: http://www.halifax.co.uk/sharedealing/our-accounts/fantasy-trader/ Virtual or fantasy trading is a great way to immerse yourself in that world and not lose your money whilst you make basic mistakes. Once real money is involved, there are some online platforms that are cheaper for lower amount investing than others. This article is a good, recent starting point for you: http://www.thisismoney.co.uk/money/diyinvesting/article-1718291/Pick-best-cheapest-investment-Isa-platform.html Best of luck in the investment casino! (And only risk money you can afford to lose - as with any form of investment, gambling, etc)",
"title": ""
}
] |
fiqa
|
a2421c9cde73bb37217bf8228b997018
|
Question about Tax Information from a Prospectus
|
[
{
"docid": "db75040036c8e3988feaa05c0ecf8ed7",
"text": "At the end of each calendar year the mutual fund company will send you a 1099 form. It will tell you and the IRS what your account earned. You will see boxes for: You will end up paying taxes on these, unless the fund is part of a 401K or IRA. These taxes will be due even if you never sold any shares. They are due even if it was a bad year and the value of your account went down. Most if not all states will levy an income tax yon your dividends and capital gains each year. When you sell your shares you may also owe income taxes if you made a profit. The actual taxes due is a more complex calculation due to long term vs short term, and what other gains or losses you have. Partial sales also take into account which shares are sold.",
"title": ""
},
{
"docid": "ab9d23b9c64bf48c909c67f1f807bef8",
"text": "\"A mutual fund could make two different kinds of distributions to you: Capital gains: When the fund liquidates positions that it holds, it may realize a gain if it sells the assets for a greater price than the fund purchased them for. As an example, for an index fund, assets may get liquidated if the underlying index changes in composition, thus requiring the manager to sell some stocks and purchase others. Mutual funds are required to distribute most of their income that they generate in this way back to its shareholders; many often do this near the end of the calendar year. When you receive the distribution, the gains will be categorized as either short-term (the asset was held for less than one year) or long-term (vice versa). Based upon the holding period, the gain is taxed differently. Currently in the United States, long-term capital gains are only taxed at 15%, regardless of your income tax bracket (you only pay the capital gains tax, not the income tax). Short-term capital gains are treated as ordinary income, so you will pay your (probably higher) tax rate on any cash that you are given by your mutual fund. You may also be subject to capital gains taxes when you decide to sell your holdings in the fund. Any profit that you made based on the difference between your purchase and sale price is treated as a capital gain. Based upon the period of time that you held the mutual fund shares, it is categorized as a short- or long-term gain and is taxed accordingly in the tax year that you sell the shares. Dividends: Many companies pay dividends to their stockholders as a way of returning a portion of their profits to their collective owners. When you invest in a mutual fund that owns dividend-paying stocks, the fund is the \"\"owner\"\" that receives the dividend payments. As with capital gains, mutual funds will redistribute these dividends to you periodically, often quarterly or annually. The main difference with dividends is that they are always taxed as ordinary income, no matter how long you (or the fund) have held the asset. I'm not aware of Texas state tax laws, so I can't comment on your other question.\"",
"title": ""
}
] |
[
{
"docid": "b9d65921f3dd4bb75d269ea1873d8ddf",
"text": "The default is FIFO: first in - first out. Unless you specifically instruct the brokerage otherwise, they'll report that the lot you've sold is of Jan 5, 2011. Note, that before 2011, they didn't have to report the cost basis to the IRS, and it would be up to you to calculate the cost basis at tax time, but that has been changed in 2011 and you need to make sure you've instructed the brokerage which lot exactly you're selling. I'm assuming you're in the US, in other places laws may be different.",
"title": ""
},
{
"docid": "4d8138041b3ccb69d73a2e767b142572",
"text": "\"Not sure I understood, so I'll summarize what I think I read: You got scholarship X, paid tuition Y < X, and you got 1098T to report these numbers. You're asking whether you need to pay taxes on (X-Y) that you end up with as income. The answer is: of course. You can have even lower tax liability if you don't include the numbers on W2, right? So why doesn't it occur to you to ask \"\"if I don't include W2 in the software, it comes up with a smaller tax - do I need to include it?\"\"?\"",
"title": ""
},
{
"docid": "bc9c200f6660dd9981ab887eb936190c",
"text": "I think the IRS doc you want is http://www.irs.gov/publications/p550/ch04.html#en_US_2010_publink100010601 I believe the answers are:",
"title": ""
},
{
"docid": "7c2718faab7ee5008d2257c0669ca216",
"text": "\"I'm assuming that by saying \"\"I'm a US resident now\"\" you're referring to the residency determination for tax purposes. Should I file a return in the US even though there is no income here ? Yes. US taxes its residents for tax purposes (which is not the same as residents for immigration or other purposes) on worldwide income. If yes, do I get credits for the taxes I paid in India. What form would I need to submit for the same ? I am assuming this form has to be issued by IT Dept in India or the employer in India ? The IRS doesn't require you to submit your Indian tax return with your US tax return, however they may ask for it later if your US tax return comes under examination. Generally, you claim foreign tax credits using form 1116 attached to your tax return. Specifically for India there may also be some clause in the Indo-US tax treaty that might be relevant to you. Treaty claims are made using form 8833 attached to your tax return, and I suggest having a professional (EA/CPA licensed in your State) prepare such a return. Although no stock transactions were done last year, should I still declare the value of total stocks I own ? If so what is an approx. tax rate or the maximum tax rate. Yes, this is done using form 8938 attached to your tax return and also form 114 (FBAR) filed separately with FinCEN. Pay attention: the forms are very similar with regard to the information you provide on them, but they go to different agencies and have different filing requirements and penalties for non-compliance. As to tax rates - that depends on the types of stocks and how you decide to treat them. Generally, the tax rate for PFIC is very high, so that if any of your stocks are classified as PFIC - you'd better talk to a professional tax adviser (EA/CPA licensed in your State) about how to deal with them. Non-PFIC stocks are dealt with the same as if they were in the US, unless you match certain criteria described in the instructions to form 5471 (then a different set of rules apply, talk to a licensed tax adviser). I will be transferring most of my stock to my father this year, will this need to be declared ? Yes, using form 709. Gift tax may be due. Talk to a licensed tax adviser (EA/CPA licensed in your State). I have an apartment in India this year, will this need to be declared or only when I sell the same later on ? If there's no income from it - then no (assuming you own it directly in your own name, for indirect ownership - yes, you do), but when you sell you will have to declare the sale and pay tax on the gains. Again, treaty may come into play, talk to a tax adviser. Also, be aware of Section 121 exclusion which may make it more beneficial for you to sell earlier.\"",
"title": ""
},
{
"docid": "bbf1a4e9a95e8154a0e768606992b801",
"text": "I gift my daughter stock worth $1000. No tax issue. She sells it for $2000, and has a taxable gain of $1000 that shows up on her return. Yes, you need to find out the date of the gift, as that is the date you value the fund for cost basis. The $3500 isn't a concern, as the gift seems to have been given well before that. It's a long term capital gain when you sell it. And, in a delightfully annoying aspect of our code, the dividends get added to basis each year, as you were paying tax on the dividend whether or not you actually received it. Depending on the level of dividends, your basis may very well be as high as the $6500 current value. (pls ask if anything here needs clarification)",
"title": ""
},
{
"docid": "cfdd30822408ce6a64caca92a58fd09d",
"text": "I assume I can/will need to file an 83(b) election, in order to avoid tax repercussions? What exactly will this save me from? 83(b) election is for restricted stock grants, not for stock purchases. For restricted stocks, you generally pay income tax when they vest. For startups the price difference between the time of the grant and the time of the vesting can be astronomical and by choosing 83(b) you effectively pay income tax on the value of the grant instead of the value of the vest. Then, you only pay capital gains tax on the difference between the sale price and the grant value when you sell. In your case you're exercising an option, i.e.: you're buying a stock, so 83(b) is irrelevant. What you will pay though is the tax on the difference between the strike price and the stock FMV (unless the stocks you end up buying are restricted - which would have been the case if you exercised your options early, but I don't think is going to be the case now). What steps should I take to (in the eyes of the law) guarantee that the board has received my execution notice? The secretary of the board is a notorious procrastinator and can be very unorganized. You should read what the grant contract/company policy says on that. Ask the HR/manager. Usually, a certified letter with return receipt should be enough, but you should verify the format, the address, and the timeframe.",
"title": ""
},
{
"docid": "d1e92a6e17ba78551b7fd1703fae444c",
"text": "\"Are these all of the taxes or is there any additional taxes over these? Turn-over tax is not for retail investors. Other taxes are paid by the broker as part of transaction and one need not worry too much about it. Is there any \"\"Income tax\"\" to be paid for shares bought/holding shares? No for just buying and holding. However if you buy and sell; there would be a capital gain or loss. In stocks, if you hold a security for less than 1 year and sell it; it is classified as short term capital gain and taxes at special rate of 15%. The loss can be adjusted against any other short term gain. If held for more than year it is long term capital gain. For stock market, the tax is zero, you can't adjust long term losses in stock markets. Will the money received from selling shares fall under \"\"Taxable money for FY Income tax\"\"? Only the gain [or loss] will be tread as income not the complete sale value. To calculate gain, one need to arrive a purchase price which is price of stock + Brokerage + STT + all other taxes. Similar the sale price will be Sales of stock - Brokerage - STT - all other taxes. The difference is the gain. Will the \"\"Dividend/Bonus/Buy-back\"\" money fall under taxable category? Dividend is tax free to individual as the company has already paid dividend distribution tax. Bonus is tax free event as it does not create any additional value. Buy-Back is treated as sale of shares if you have participated. Will the share-holder pay \"\"Dividend Distribution Tax\"\"? Paid by the company. What is \"\"Capital Gains\"\"? Profit or loss of buying and selling a particular security.\"",
"title": ""
},
{
"docid": "b622bc6d4c5c0e320f76c82c2ef0411a",
"text": "\"SEC filings do not contain this information, generally. You can find intangible assets on balance sheets, but not as detailed as writing down every asset separately, only aggregated at some level (may be as detailed as specifying \"\"patents\"\" as a separate line, although even that I wouldn't count on). Companies may hold different rights to different patents in different countries, patents are being granted and expired constantly, and unless this is a pharma industry or a startup - each single patent doesn't have a critical bearing on the company performance.\"",
"title": ""
},
{
"docid": "fa1a7d4b336581a906ccd15d29d2bb78",
"text": "\"Financial statements provide a large amount of specialized, complex, information about the company. If you know how to process the statements, and can place the info they provide in context with other significant information you have about the market, then you will likely be able to make better decisions about the company. If you don't know how to process them, you're much more likely to obtain incomplete or misleading information, and end up making worse decisions than you would have before you started reading. You might, for example, figure out that the company is gaining significant debt, but might be missing significant information about new regulations which caused a one time larger than normal tax payment for all companies in the industry you're investing in, matching the debt increase. Or you might see a large litigation related spending, without knowing that it's lower than usual for the industry. It's a chicken-and-egg problem - if you know how to process them, and how to use the information, then you already have the answer to your question. I'd say, the more important question to ask is: \"\"Do I have the time and resources necessary to learn enough about how businesses run, and about the market I'm investing in, so that financial statements become useful to me?\"\" If you do have the time, and resources, do it, it's worth the trouble. I'd advise in starting at the industry/business end of things, though, and only switching to obtaining information from the financial statements once you already have a good idea what you'll be using it for.\"",
"title": ""
},
{
"docid": "e0a23b436069fb1ebdb4e83095041424",
"text": "\"You should contact the company and the broker about the ownership. Do you remember ever selling your position? When you look back at your tax returns/1099-B forms - can you identify the sale? It should have been reported to you, and you should have reported it to the IRS. If not - then you're probably still the owner. As to K-1 - the income reported doesn't have to be distributed to you. Partnership is a pass-through entity, and cannot \"\"accumulate\"\" earnings for tax purposes, everything is deemed distributed. If, however, it is not actually distributed - you're still taxed on the income, but it is added to your basis in the partnership and you get the tax \"\"back\"\" when you sell your position. However, you pay income tax on the income based on the kind of the income, and on the sale - at capital gains rates. So the amounts added to your position will reduce your capital gains tax, but may be taxed at ordinary rates. Get a professional advice on the issue and what to do next, talk to a EA/CPA licensed in New York.\"",
"title": ""
},
{
"docid": "e7416d510ca61428b034926cf72ad7b2",
"text": "\"Appears to be a hypothetical question and not really worth answering but... Must it be explained.. no, not until audited. It's saying that for everything reported on a tax return, people have to include an explanation for everything, which you do not, unless you want to make some type of 'disclosure' which is a different matter. Must it be reported.. Yes, based on info presented. All income is taxable unless \"\"specifically exempted\"\" per the US Tax code or court cases. Gift vs Found Income... it's not 'found' income as someone gave (gifted) the money to him. Generally, gifts received are not taxable and don't have to be reported.\"",
"title": ""
},
{
"docid": "79bd1f7fa03d24bd2c00af21a84a8ba9",
"text": "isn't the answer in the question? it says the company starts officially NEXT year, yet it is asking for the net present value...i.e what that project is worth today. it could be that funds for that particular project may not be necessary for another year, but there may be other projects to evaluate against today.",
"title": ""
},
{
"docid": "9a9d932f7e317e965f944a41ec48a41d",
"text": "I can make that election to pay taxes now (even though they aren't vested) based on the dollar value at the time they are granted? That is correct. You must file the election with the IRS within 30 days after the grant (and then attach a copy to that year's tax return). would I not pay any taxes on the gains because I already claimed them as income? No, you claim income based on the grant value, the gains after that are your taxable capital gains. The difference is that if you don't use 83(b) election - that would not be capital gains, but rather ordinary salary income. what happens if I quit / get terminated after paying taxes on un-vested shares? Do I lose those taxes, or do I get it back in a refund next year? Or would it be a deduction next year? You lose these taxes. That's the risk you're taking. Generally 83(b) election is not very useful for RSUs of established public companies. You take a large risk of forfeited taxes to save the difference between capital gains and ordinary gains, which is not all that much. It is very useful when you're in a startup with valuations growing rapidly but stocks not yet publicly trading, which means that if you pay tax on vest you'll pay much more and won't have stocks to sell to cover for that, while the amounts you put at risk are relatively small.",
"title": ""
},
{
"docid": "36fcccad5602fec5364f2c1f4e6d3235",
"text": "Generally stock trades will require an additional Capital Gains and Losses form included with a 1040, known as Schedule D (summary) and Schedule D-1 (itemized). That year I believe the maximum declarable Capital loss was $3000--the rest could carry over to future years. The purchase date/year only matters insofar as to rank the lot as short term or long term(a position held 365 days or longer), short term typically but depends on actual asset taxed then at 25%, long term 15%. The year a position was closed(eg. sold) tells you which year's filing it belongs in. The tiny $16.08 interest earned probably goes into Schedule B, typically a short form. The IRS actually has a hotline 800-829-1040 (Individuals) for quick questions such as advising which previous-year filing forms they'd expect from you. Be sure to explain the custodial situation and that it all recently came to your awareness etc. Disclaimer: I am no specialist. You'd need to verify everything I wrote; it was just from personal experience with the IRS and taxes.",
"title": ""
},
{
"docid": "21d0c3dcd64ed588f9aa8af50c2612a9",
"text": "An ISA is a much simpler thing than I suspect you think it is. It is a wrapper or envelope, and the point of it is that HMRC does not care what happens inside the envelope, or even about extractions of funds from the envelope; they only care about insertions of funds into the envelope. It is these insertions that are limited to £15k in a tax year; what happens to the funds once they're inside the envelope is your own business. Some diagrams: Initial investment of £10k. This is an insertion into the envelope and so counts against your £15k/tax year limit. +---------ISA-------+ ----- £10k ---------> | +-------------------+ So now you have this: +---------ISA-------+ | £10k of cash | +-------------------+ Buy fund: +---------ISA-------+ | £10k of ABC | +-------------------+ Fund appreciates. This happens inside the envelope; HMRC don't care: +---------ISA-------+ | £12k of ABC | +-------------------+ Sell fund. This happens inside the envelope; HMRC don't care: +---------ISA-------+ | £12k of cash | +-------------------+ Buy another fund. This happens inside the envelope; HMRC don't care: +---------ISA-----------------+ | £10k of JKL & £2k of cash | +-----------------------------+ Fund appreciates. This happens inside the envelope; HMRC don't care: +---------ISA-----------------+ | £11k of JKL & £2k of cash | +-----------------------------+ Sell fund. This happens inside the envelope; HMRC don't care: +---------ISA-------+ | £13k of cash | +-------------------+ Withdraw funds. This is an extraction from the envelope; HMRC don't care. +---------ISA-------+ <---- £13k --------- | +-------------------+ No capital gains liability, you don't even have to put this on your tax return (if applicable) - your £10k became £13k inside an ISA envelope, so HMRC don't care. Note however that for the rest of that tax year, the most you can insert into an ISA would now be £5k: +---------ISA-------+ ----- £5k ---------> | +-------------------+ even though the ISA is empty. This is because the limit is to the total inserted during the year.",
"title": ""
}
] |
fiqa
|
306c122d7389f6d1af35fe0b2a2182eb
|
What to do if the stock you brought are stopped trading
|
[
{
"docid": "f03383a88a8140d54337e9b3816d3390",
"text": "\"The Indian regulator (SEBI) has banned trading in 300 shell companies that it views as being \"\"Shady\"\", including VB Industries. According to Money Control (.com): all these shady companies have started to rally and there was a complaint to SEBI that investors are getting SMSs from various brokerage firms to invest in them This suggests evidence of \"\"pump and dump\"\" style stock promotion. On the plus side, the SEBI will permit trading in these securities once a month : Trading in these securities shall be permitted once a month (First Monday of the month). Further, any upward price movement in these securities shall not be permitted beyond the last traded price and additional surveillance deposit of 200 percent of trade value shall be collected form the Buyers which shall be retained with Exchanges for a period of five months. This will give you an opportunity to exit your position, however, finding a buyer may be a problem and because of the severe restrictions placed on trading, any bid prices in the market are going to be a fraction of the last trade price.\"",
"title": ""
}
] |
[
{
"docid": "69003ef4b8e5329aecf0172a01c19054",
"text": "Although this is possible with many brokers, it's not advisable. In many cases you may end up with both trades executed at the same time. This is because during the opening, the stock might spike up or down heavily, bid/ask spread widens, and both of your orders would get picked up, resulting in an instant loss. Your best bet is to place the stop manually sometime after you get filled.",
"title": ""
},
{
"docid": "be25c00709dc2f9ad36703697f9aa7c0",
"text": "The volume required to significantly move the price of a security depends completely on the orderbook for that particular security. There are a variety of different reasons and time periods that a security can be halted, this will depend a bit on which exchange you're dealing with. This link might help with the halt aspect of your question: https://en.wikipedia.org/wiki/Trading_halt",
"title": ""
},
{
"docid": "1cd844e8421c007f49bdb04b4c440583",
"text": "Have the reasons you originally purchased the stock changed? Is the company still sound? Does the company have a new competitor? Has the company changed the way they operate? If the company is the same, except for stock price, why would you change your mind on the company now? ESPECIALLY if the company has not changed, -- but only other people's PERCEPTION of the company, then your original reasons for buying it are still valid. In fact, if you are not a day-trader, then this COMPANY JUST WENT ON SALE and you should buy more. If you are a day trader, then you do care about the herd's perception of value (not true value) and you should sell. DAY TRADER = SELL BUY AND HOLD (WITH INTELLIGENT RESEARCH) = BUY MORE",
"title": ""
},
{
"docid": "9443fc7e998ed1319ccfc06ef4babaf3",
"text": "\"The question mentions a trailing stop. A trailing stop is a type of stop loss order. It allows you to protect your profit on the stock, while \"\"keeping you in the stock\"\". A trailing stop is specified as a percentage of market price e.g. you might want to set a trailing stop at 5%, or 10% below the market price. A trailing stop goes up along with the market price, but if the market price drops it doesn't move down too. The idea is that it is there to \"\"catch\"\" your profit, if the market suddenly moves quickly against you. There is a nice explanation of how that works in the section titled Trailing Stops here. (The URL for the page, \"\"Tailing Stops\"\" is misleading, and a typo, I suspect.)\"",
"title": ""
},
{
"docid": "a9c3a5aaf5df6ca43a2eef88687560f3",
"text": "\"It would be useful to forget about the initial price that you invested - that loss happened, it's over and irreversible, it's a sunk cost; and anchoring on it would only cause you to make worse decisions. Getting \"\"back\"\" from a loss is done exactly the same as growing an investment that didn't have such a loss. You have x units of stock that's currently priced $46.5 - that is your blank slate; you need to decide wether you should hold that stock (i.e., if $46.5 is undervalued and likely to increase) or it's likely to fall further and you should sell it. The decision you make should be exactly the same as if you'd bought it a bit earlier for $40.\"",
"title": ""
},
{
"docid": "a449ebd5cbf311c0f30e78020ee78c18",
"text": "Will there be a scenario in which I want to sell, but nobody wants to buy from me and I'm stuck at the brokerage website? Similarly, if nobody wants to sell their stocks, I will not be able to buy at all? Yes, that is entirely possible.",
"title": ""
},
{
"docid": "c3b9fd5ea693ffc56f91103de3f20618",
"text": "You would place a stop buy market order at 43.90 with a stop loss market order at 40.99 and a stop limit profit order at 49.99. This should all be entered when you place your initial buy stop order. The buy stop order will triger and be traded once the price reaches 43.90or above. At this point both the stop loss market order and the stop limit profit order will become active. If either of them is triggered and traded the other order will be cancelled automatically.",
"title": ""
},
{
"docid": "4746c7f0338bf0b473f7030d7e6dc408",
"text": "You can obtain a stocklist if you file a lawsuit as a shareholder against the company demanding that you receive the list. It's called an inspection case. The company then has to go to Cede and/or the Depository Trust Company who then compiles the NOBO COBO list of beneficiary stockholders. SEC.gov gives you a very limited list of people who have had to file 13g or 13d or similar filings. These are large holders. To get the list of ALL stockholders you have to go through Cede.",
"title": ""
},
{
"docid": "6af4d9385781695638a8d3a554c95299",
"text": "Double check with your broker, but if a series isn't open yet for trading, you can't trade it. If there is a series trading without open interest (rare), simply work your open, as options are created at trade. If you have enough money, do this https://money.stackexchange.com/questions/21839/list-of-cflex-2-0-brokers",
"title": ""
},
{
"docid": "87a5f0d18bc2cb7e78e815104cdd5230",
"text": "TD will only sell the stock for you if there's a buyer. There was a buyer, for at least one transaction of at least one stock at 96.66. But who said there were more? Obviously, the stocks later fell, i.e.: there were not that many buyers as there were sellers. What I'm saying is that once the stock passed/reached the limit, the order becomes an active order. But it doesn't become the only active order. It is added to the list, and to the bottom of that list. Obviously, in this case, there were not enough buyers to go through the whole list and get to your order, and since it was a limit order - it would only execute with the limit price you put. Once the price went down you got out of luck. That said, there could of course be a possibility of a system failure. But given the story of the market behavior - it just looks like you miscalculated and lost on a bet.",
"title": ""
},
{
"docid": "72f8406a31741459ff9869a0c5d52123",
"text": "\"Does your job give you access to \"\"confidential information\"\", such that you can only buy or sell shares in the company during certain windows? Employees with access to company financial data, resource planning databases, or customer databases are often only allowed to trade in company securities (or derivatives thereof) during certain \"\"windows\"\" a few days after the company releases its quarterly earnings reports. Even those windows can be cancelled if a major event is about to be announced. These windows are designed to prevent the appearance of insider trading, which is a serious crime in the United States. Is there a minimum time that you would need to hold the stock, before you are allowed to sell it? Do you have confidence that the stock would retain most of its value, long enough that your profits are long-term capital gains instead of short-term capital gains? What happens to your stock if you lose your job, retire, or go to another company? Does your company's stock price seem to be inflated by any of these factors: If any of these nine warning flags are the case, I would think carefully before investing. If I had a basic emergency fund set aside and none of the nine warning flags are present, or if I had a solid emergency fund and the company seemed likely to continue to justify its stock price for several years, I would seriously consider taking full advantage of the stock purchase plan. I would not invest more money than I could afford to lose. At first, I would cash out my profits quickly (either as quickly as allowed, or as quickly as lets me minimize my capital gains taxes). I would reinvest in more shares, until I could afford to buy as many shares as the company would allow me to buy at the discount. In the long-run, I would avoid having more than one-third of my net worth in any single investment. (E.g., company stock, home equity, bonds in general, et cetera.)\"",
"title": ""
},
{
"docid": "8767fd8487c7fbf1fe4d78d52a38411b",
"text": "\"My broker offers the following types of sell orders: I have a strategy to sell-half of my position once the accrued value has doubled. I take into account market price, dividends, and taxes (Both LTgain and taxes on dividends). Once the market price exceeds the magic trigger price by 10%, I enter a \"\"trailing stop %\"\" order at 10%. Ideally what happens is that the stock keeps going up, and the trailing stop % keeps following it, and that goes on long enough that accrued dividends end up paying for the stock. What happens in reality is that the stock goes up some, goes down some, then the order gets cancelled because the company announces dividends or something dumb like that. THEN I get into trouble trying to figure out how to re-enter the order, maintaining the unrealized gain in the history of the trailing stop order. I screwed up and entered the wrong type of order once and sold stock I didn't want to. Lets look at an example. a number of years ago, I bought some JNJ -- a hundred shares at 62.18. - Accumulated dividends are 2127.75 - My spreadsheet tells me the \"\"double price\"\" is 104.54, and double + 10% is 116.16. - So a while ago, JNJ exceeded 118.23, and I entered a Trailing Stop 10% order to sell 50 shares of JNJ. The activation price was 106.41. - since then, the price has gone up and down... it reached a high of 126.07, setting the activation price at 113.45. - Then, JNJ announced a dividend, and my broker cancelled the trailing stop order. I've re-entered a \"\"Stop market\"\" order at 113.45. I've also entered an alert for $126.07 -- if the alert gets triggered, I'll cancel the Market Stop and enter a new trailing stop.\"",
"title": ""
},
{
"docid": "566c46d1e90f3c2b4f6b483efe05b910",
"text": "If the stock starts to go down DO NOT SELL!! My reasoning for this is because, when you talk about the stock market, you haven't actually lost any money until you sell the stock. So if you sell it lower than you bought it, you loose money. BUT if you wait for the stock to go back up again, you will have made money.",
"title": ""
},
{
"docid": "f801652ae312cec1b606290fab1e0261",
"text": "+1 to YosefWeiner. Let me add: Legally, technically, or at least theoretically, when you buy stock through a broker, you own the stock, not the broker. The broker is just holding it for you. If the broker goes bankrupt, that has nothing to do with the value of your stock. That said, if the broker fails to transfer your shares to another broker before ceasing operation, it could be difficult to get your assets. Suppose you take your shoes to a shoe repair shop. Before you can pick them up, the shop goes bankrupt. The shoes are still rightfully yours. If the shop owner was a nice guy he would have called you and told you to pick up your shoes before he closed the shop. But if he didn't, you may have to go through legal gyrations to get your shoes back. If as his business failed the shop owner quit caring and got sloppy about his records, you might have to prove that those shoes are yours and not someone else's, etc.",
"title": ""
},
{
"docid": "abf23d001d2d137b8fb1603b8748935e",
"text": "I'm a bit out of my element here, but my guess is the right way to think about this is: knowing what you do now about the underlying company (NZT), pretend they had never offered ADR shares. Would you buy their foreign listed shares today? Another way of looking at it would be: would you know how to sell the foreign-listed shares today if you had to do so in an emergency? If not, I'd also push gently in the direction of selling sooner than later.",
"title": ""
}
] |
fiqa
|
0af2a87ef5142a009a6e9bb33b0b4f10
|
How late is Roth (rather than pretax) still likely to help?
|
[
{
"docid": "96cebd4831ce216b7c00f7a039a8691c",
"text": "My simplest approach is to suggest that people go Roth when in the 15% bracket, and use pre-tax to avoid 25%. I outlined that strategy in my article The 15% solution. The monkey wrench that gets thrown in to this is the distortion of the other smooth marginal tax curve caused by the taxation of social security. For those who can afford to, it makes the case to lean toward Roth as much as possible. I'd suggest always depositing pretax, and using conversions to better control the process. Two major benefits to this. It's less a question of too late than of what strategy to use.",
"title": ""
},
{
"docid": "8a62de7c839adaec6cb463239c9d06ab",
"text": "Years before retirement isn't related at all to the Pretax IRA/Roth IRA decision, except insomuch as income typically trends up over time for most people. If tax rates were constant (both at income levels and over time!), Roth and Pretax would be identical. Say you designate 100k for contribution, 20% tax rate. 80k contributed in Roth vs. 100k contributed in Pretax, then 20% tax rate on withdrawal, ends up with the same amount in your bank account after withdrawal - you're just moving the 20% tax grab from one time to another. If you choose Roth, it's either because you like some of the flexibility (like taking out contributions after 5 years), or because you are currently paying a lower marginal rate than you expect you will be in the future - either because you aren't making all that much this year, or because you are expecting rates to rise due to political changes in our society. Best is likely a diversified approach - some of your money pretax, some posttax. At least some should be in a pretax IRA, because you get some tax-free money each year thanks to the personal exemption. If you're working off of 100% post-tax, you are paying more tax than you ought unless you're getting enough Social Security to cover the whole 0% bucket (and probably the 10% bucket, also). So for example, you're thinking you want 70k a year. Assuming single and ignoring social security (as it's a very complicated issue - Joe Taxpayer has a nice blog article regarding it that he links to in his answer), you get $10k or so tax-free, then another $9k or so at 10% - almost certainly lower than what you pay now. So you could aim to get $19k out of your pre-tax IRA, then, and 51k out of your post-tax IRA, meaning you only pay $900 in taxes on your income. Of course, if you're in the 25% bucket now, you may want to use more pretax, since you could then take that out - all the way to around $50k (standard exemption + $40k or so point where 25% hits). But on the other hand, Social Security would probably change that equation back to using primarily Roth if you're getting a decent Social Security check.",
"title": ""
}
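The passage above argues that, at a constant tax rate, Roth and pre-tax treatments of the same money end up identical. A minimal sketch of that commutativity argument, assuming a placeholder growth factor (the $100k and 20% figures come from the passage):

```python
import math

# With a constant tax rate, Roth and pre-tax (Traditional) treatments come out
# the same, because multiplication commutes:
# amount * (1 - tax) * growth == amount * growth * (1 - tax).

amount = 100_000   # money earmarked for contribution (figure from the passage)
tax_rate = 0.20    # constant marginal rate now and in retirement (from the passage)
growth = 3.0       # assumed total growth factor over the holding period (placeholder)

roth_after_tax = amount * (1 - tax_rate) * growth   # taxed up front, grows tax-free
trad_after_tax = amount * growth * (1 - tax_rate)   # grows pre-tax, taxed on withdrawal

print(round(roth_after_tax, 2), round(trad_after_tax, 2))   # both 240000.0
print(math.isclose(roth_after_tax, trad_after_tax))         # True
```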
] |
[
{
"docid": "71146df668f12b055a8d5912ca96a59b",
"text": "It depends on the relative rates and relative risk. Ignore the deduction. You want to compare the rates of the investment and the mortgage, either both after-tax or both before-tax. Your mortgage costs you 5% (a bit less after-tax), and prepayments effectively yield a guaranteed 5% return. If you can earn more than that in your IRA with a risk-free investment, invest. If you can earn more than that in your IRA while taking on a degree of risk that you are comfortable with, invest. If not, pay down your mortgage. See this article: Mortgage Prepayment as Investment: For example, the borrower with a 6% mortgage who has excess cash flow would do well to use it to pay down the mortgage balance if the alternative is investment in assets that yield 2%. But if two years down the road the same assets yield 7%, the borrower can stop allocating excess cash flow to the mortgage and start accumulating financial assets. Note that he's not comparing the relative risk of the investments. Paying down your mortgage has a guaranteed return. You're talking about CDs, which are low risk, so your comparison is simple. If your alternative investment is stocks, then there's an element of risk that it won't earn enough to outpace the mortgage cost. Update: hopefully this example makes it clearer: For example, lets compare investing $100,000 in repayment of a 6% mortgage with investing it in a fund that pays 5% before-tax, and taxes are deferred for 10 years. For the mortgage, we enter 10 years for the period, 3.6% (if that is the applicable rate) for the after tax return, $100,000 as the present value, and we obtain a future value of $142,429. For the alternative investment, we do the same except we enter 5% as the return, and we get a future value of $162,889. However, taxes are now due on the $62,889 of interest, which reduces the future value to $137,734. The mortgage repayment does a little better. So if your marginal tax rate is 30%, you have $10k extra cash to do something with right now, mortgage rate is 5%, IRA CD APY is 1%, and assuming retirement in 30 years: If you want to plug it into a spreadsheet, the formula to use is (substitute your own values): (Note the minus sign before the cash amount.) Make sure you use after tax rates for both so that you're comparing apples to apples. Then multiply your IRA amount by (1-taxrate) to get the value after you pay future taxes on IRA withdrawals.",
"title": ""
},
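The passage above quotes a worked comparison of prepaying a 6% mortgage (3.6% after tax) versus investing at 5% with taxes deferred for 10 years. The sketch below reproduces that arithmetic; the roughly 40% tax rate is my inference from the quoted numbers, not something the passage states.

```python
# Sketch of the quoted comparison: $100,000 either prepays a 6% mortgage
# (3.6% after-tax) or goes into a fund earning 5%, with tax deferred 10 years.
# The ~40% tax rate is inferred from the passage's figures.

principal = 100_000
years = 10
tax_rate = 0.40                                          # inferred assumption

mortgage_fv = principal * (1 + 0.036) ** years           # about 142,429
fund_fv_pretax = principal * (1 + 0.05) ** years         # about 162,889
fund_gain = fund_fv_pretax - principal
fund_fv_after_tax = fund_fv_pretax - tax_rate * fund_gain  # about 137,734

print(f"Prepaying the mortgage: {mortgage_fv:,.0f}")
print(f"Investing at 5%, taxed later: {fund_fv_after_tax:,.0f}")
```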
{
"docid": "d0e622644fac5c51c872683f0cc8e444",
"text": "Also, consider the possibility of early withdrawal penalties. Regular 401k early withdrawal (for non-qualified reasons) gets you a 10% penalty, in addition to tax, on the entire amount, even if you're just withdrawing your own contributions. Withdrawing from a Roth 401k can potentially mean less penalties (if it's been in place 5 years, and subject to a bunch of fine print of course).",
"title": ""
},
{
"docid": "311624613cc87899692c9eddabdeb721",
"text": "Fast Forward 40 - 45 years, you're 70.5. You must take out ~5% from your Traditional IRA. If that was a Roth, you take out as much as you need (within reason) when you need it with zero tax consequences. I don't know (and don't care) whether they'll change the Roth tax exclusion in 40 years. It's almost guaranteed that the rate on the Roth will be less than the regular income status of a Traditional IRA. Most likely we'll have a value added tax (sales tax) then. Possibly even a Wealth Tax. The former doesn't care where the money comes from (source neutral) the latter means you loose more (probably) of that 2.2 MM than the 1.7. Finally, if you're planning on 10%/yr over 40 yrs, good luck! But that's crazy wild speculation and you're likely to be disappointed. If you're that good at picking winners, then why stop at 10%? Money makes money. Your rate of return should increase as your net worth increases. So, you should be able to pick better opportunities with 2.2 million than with a paltry 1.65 MM.",
"title": ""
},
{
"docid": "810eceab7edb6216ea4133d029874089",
"text": "\"I humbly disagree with #2. the use of Roth or pre-tax IRA depends on your circumstance. With no match in the 401(k), I'd start with an IRA. If you have more than $5k to put in, then some 401(k) would be needed. Edit - to add detail on Roth decision. I was invited to write a guest article \"\"Roth IRAs and your retirement income\"\" some time ago. In it, I discuss the large amount of pretax savings it takes to generate the income to put you in a high bracket in retirement. This analysis leads me to believe the risk of paying tax now only to find tHat you are in a lower bracket upon retiring is far greater than the opposite. I think if there were any generalization (I hate rules of thumb, they are utterly pick-apartable) to be made, it's that if you are in the 15% bracket or lower, go Roth. As your income puts you into 25%, go pretax. I believe this would apply to the bulk of investors, 80%+.\"",
"title": ""
},
{
"docid": "03a994a5087593a76b53c9ac7b8de476",
"text": "\"(I'm expanding on what @BrenBarn had added to his answer.) The assumption of \"\"same tax bracket in retirement\"\" is convenient, but simplistic. If you are in, for instance, the second-lowest bracket now, and happen to remain in the second-lowest bracket for retirement, then Roth and traditional account options may seem equal — and your math backs that up, on the surface — but that's making an implicit assumption that tax rates will be constant. Yet, tax brackets and rates can change. And they do. The proof. i.e. Your \"\"15% bracket\"\" could become, say, the \"\"17% bracket\"\" (or, perhaps, the \"\"13% bracket\"\") All the while you might remain in the second-lowest bracket. So, given the potential for fluctuating tax rates, it's easy to see that there can be a case where a traditional tax-deferred account can yield more after-tax income than a Roth post-tax account, even if you remain in the same bracket: When your tax bracket's tax rate declines. So, don't just consider what bracket you expect to be in. Consider also whether you expect tax rates to go up, down, or remain the same. For twenty-something young folk, retirement is a long way away (~40 years) and I think in that time frame it is far more likely that the tax brackets won't have the same underlying tax rates that they have now. Of course, we can't know for sure which direction tax rates will head in, but an educated guess can help. Is your government deep in debt, or flush with extra cash? On the other hand, if you don't feel comfortable making predictions, much better than simply assuming \"\"brackets and rates will stay the same as now, so it doesn't matter\"\" is to instead hedge your bets: save some of your retirement money in a Roth-style account, and some in a traditional pre-tax account. Consider it tax diversification. See also my answer at this older but related question:\"",
"title": ""
},
{
"docid": "909eae1d15d84e2380144c2af50e1f14",
"text": "My observations is that this seems like hardly enough to kill inflation. Is he right? Or are there better ways to invest? The tax deferral part of the equation isn't what dominates regarding whether your 401k beats 30 years of inflation; it is the return on investment. If your 401k account tanks due to a prolonged market crash just as you retire, then you might have been better off stashing the money in the bank. Remember, 401k money at now + 30 years is not a guaranteed return (though many speak as though it were). There is also the question as to whether fees will eat up some of your return and whether the funds your 401k invests in are good ones. I'm uneasy with the autopilot nature of the typical 401k non-strategy; it's too much the standard thing to do in the U.S., it's too unconscious, and strikes me as Ponzi-like. It has been a winning strategy for some already, sure, and maybe it will work for the next 30-100 years or more. I just don't know. There are also changes in policy or other unknowns that 30 years will bring, so it takes faith I don't have to lock away a large chunk of my savings in something I can't touch without hassle and penalty until then. For that reason, I have contributed very little to my 403b previously, contribute nothing now (though employer does, automatically. I have no match.) and have built up a sizable cash savings, some of which may be used to start a business or buy a house with a small or no mortgage (thereby guaranteeing at least not paying mortgage interest). I am open to changing my mind about all this, but am glad I've been able to at least save a chunk to give me some options that I can exercise in the next 5-10 years if I want, instead of having to wait 25 or more.",
"title": ""
},
{
"docid": "5edca99d5d18ea6c96437d83eef4b26b",
"text": "\"The biggest and primary question is how much money you want to live on within retirement. The lower this is, the more options you have available. You will find that while initially complex, it doesn't take much planning to take complete advantage of the tax system if you are intending to retire early. Are there any other investment accounts that are geared towards retirement or long term investing and have some perk associated with them (tax deferred, tax exempt) but do not have an age restriction when money can be withdrawn? I'm going to answer this with some potential alternatives. The US tax system currently is great for people wanting to early retire. If you can save significant money you can optimize your taxes so much over your lifetime! If you retire early and have money invested in a Roth IRA or a traditional 401k, that money can't be touched without penalty until you're 55/59. (Let's ignore Roth contributions that can technically be withdrawn) Ok, the 401k myth. The \"\"I'm hosed if I put money into it since it's stuck\"\" perspective isn't true for a variety of reasons. If you retire early you get a long amount of time to take advantage of retirement accounts. One way is to primarily contribute to pretax 401k during working years. After retiring, begin converting this at a very low tax rate. You can convert money in a traditional IRA whenever you want to be Roth. You just pay your marginal tax rate which.... for an early retiree might be 0%. Then after 5 years - you now have a chunk of principle that has become Roth principle - and can be withdrawn whenever. Let's imagine you retire at 40 with 100k in your 401k (pretax). For 5 years, you convert $20k (assuming married). Because we get $20k between exemptions/deduction it means you pay $0 taxes every year while converting $20k of your pretax IRA to Roth. Or if you have kids, even more. After 5 years you now can withdraw that 20k/year 100% tax free since it has become principle. This is only a good idea when you are retired early because you are able to fill up all your \"\"free\"\" income for tax conversions. When you are working you would be paying your marginal rate. But your marginal rate in retirement is... 0%. Related thread on a forum you might enjoy. This is sometimes called a Roth pipeline. Basically: assuming you have no income while retired early you can fairly simply convert traditional IRA money into Roth principle. This is then accessible to you well before the 55/59 age but you get the full benefit of the pretax money. But let's pretend you don't want to do that. You need the money (and tax benefit!) now! How beneficial is it to do traditional 401ks? Imagine you live in a state/city where you are paying 25% marginal tax rate. If your expected marginal rate in your early retirement is 10-15% you are still better off putting money into your 401k and just paying the 10% penalty on an early withdrawal. In many cases, for high earners, this can actually still be a tax benefit overall. The point is this: just because you have to \"\"work\"\" to get money out of a 401k early does NOT mean you lose the tax benefits of it. In fact, current tax code really does let an early retiree have their cake and eat it too when it comes to the Roth/traditional 401k/IRA question. Are you limited to a generic taxable brokerage account? Currently, a huge perk for those with small incomes is that long term capital gains are taxed based on your current federal tax bracket. 
If your federal marginal rate is 15% or less you will pay nothing for long term capital gains, until this income pushes you into the 25% federal bracket. This might change, but right now it means you can capture many capital gains without paying taxes on them. This is huge for early retirees who can manipulate income. You can have significant \"\"income\"\" and not pay taxes on it. You can also stack this with the previously mentioned Roth conversions. Convert traditional IRA money until you would begin owing any federal taxes, then capture long term capital gains until you would pay tax on those. Combined, this can represent a huge amount of money per year. So littleadv mentioned HSAs, but for an early retiree they can be ridiculously good. What this means is you can invest the maximum into your HSA for 10 years, let it grow 100% tax free, and save all your medical receipts/etc. Then in 10 years start withdrawing that money. While it sucks that healthcare costs so much in America, you might as well take advantage of the tax opportunities to make it suck slightly less. There are many online communities dedicated to learning and optimizing their lives in order to achieve early retirement. The question you are asking can be answered superficially in the above, but for a comprehensive plan you might want other resources. Some you might enjoy:\"",
"title": ""
},
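The passage above describes a Roth conversion ladder using round figures. Here is a minimal sketch of that schedule, assuming the passage's numbers (retire at 40 with $100k pre-tax, convert $20k/year, and roughly $20k of deductions/exemptions for a married couple); exact deduction amounts vary by year and filing status, and growth is ignored.

```python
# Sketch of a Roth conversion ladder using the passage's figures.
# Assumptions: $20k/year conversions, ~$20k of deductions/exemptions zeroes
# out the tax, and each converted chunk becomes withdrawable after 5 years.

pretax_balance = 100_000
deductions = 20_000          # assumed standard deduction + exemptions (per the passage)
annual_conversion = 20_000

converted_principal = 0
for year in range(1, 6):
    taxable = max(0, annual_conversion - deductions)   # 0 in this example
    pretax_balance -= annual_conversion
    converted_principal += annual_conversion
    print(f"Year {year}: converted {annual_conversion:,}, tax owed on {taxable:,}")

# After each conversion seasons for 5 years, that principal can be withdrawn
# penalty-free, so roughly $20k/year becomes available from year 6 onward.
print(f"Total converted principal: {converted_principal:,}")
```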
{
"docid": "348fe523b39695c11e4bb9e24392e524",
"text": "If you're getting the same total amount of money every year, then the main issue is psychological. I mean, you may find it easier to manage your money if you get it on one schedule rather than another. It's generally better to get money sooner rather than later. If you can deposit it into an account that pays interest or invest it between now and when you need it, then you'll come out ahead. But realistically, if we're talking about getting money a few days or a week or two sooner, that's not going to make much difference. If you get a paycheck just before the end of the year versus just after the end of the year, there will be tax implications. If the paycheck is delayed until January, then you don't have to pay taxes on it this year. Of course you'll have to pay the taxes next year, so that could be another case of sooner vs later. But it can also change your total taxes, because, in the US and I think many other countries, taxes are not a flat percentage, but the more you make, the higher the tax rate. So if you can move income to a year when you have less total income, that can lower your total taxes. But really, the main issue would be how it affects your budgeting. Others have discussed this so I won't repeat.",
"title": ""
},
{
"docid": "a448d95f22d848cd9953392e69d8a3c6",
"text": "If you exceed the income limit for deducting a traditional IRA (which is very low if you are covered by a 401(k) ), then your IRA options are basically limited to a Roth IRA. The Cramer person probably meant to compare 401(k) and IRA from the same pre-/post-tax-ness, so i.e. Traditional 401(k) vs. Traditional IRA, or Roth 401(k) vs. Roth IRA. Comparing a Roth investment against a Traditional investment goes into a whole other topic that only confuses what is being discussed here. So if deducting a traditional IRA is ruled out, then I don't think Cramer's advice can be as simply applied regarding a Traditional 401(k). (However, by that logic, and since most people on 401(k) have Traditional 401(k), and if you are covered by a 401(k) then you cannot deduct a Traditional IRA unless you are super low income, that would mean Cramer's advice is not applicable in most situations. So I don't really know what to think here.)",
"title": ""
},
{
"docid": "51ec965a4eec4d21850e5055c1062b74",
"text": "\"This is an excellent topic as it impacts so many in so many different ways. Here are some thoughts on how the accounts are used which is almost as important as the as calculating the income or tax. The Roth is the best bang for the buck, once you have taken full advantage of employer matched 401K. Yes, you pay taxes upfront. All income earned isn't taxed (under current tax rules). This money can be passed on to family and can continue forever. Contributions can be funded past age 70.5. Once account is active for over 5 years, contributions can be withdrawn and used (ie: house down payment, college, medical bills), without any penalties. All income earned must be left in the account to avoid penalties. For younger workers, without an employer match this is idea given the income tax savings over the longer term and they are most likely in the lowest tax bracket. The 401k is great for retirement, which is made better if employer matches contributions. This is like getting paid for retirement saving. These funds are \"\"locked\"\" up until age 59.5, with exceptions. All contributed funds and all earnings are \"\"untaxed\"\" until withdrawn. The idea here is that at the time contributions are added, you are at a higher tax rate then when you expect to withdrawn funds. Trade Accounts, investments, as stated before are the used of taxed dollars. The biggest advantage of these are the liquidity.\"",
"title": ""
},
{
"docid": "8139827df5aa181c2aa883974232b178",
"text": "Something that's come up in comments and been alluded to in answers, but not explicit as far as I can tell: Even if your marginal tax rate now were equal to your marginal tax rate in retirement, or even lower, a traditional IRA may have advantages. That's because it's your effective tax rate that matters on withdrawls. (Based on TY 2014, single person, but applies at higher numbers for other arrangements): You pay 0 taxes on the first $6200 of income, and then pay 10% on the next $9075, then 15% on $27825, then 25% on the total amount over that up to $89530, etc. As such, even if your marginal rate is 25% (say you earn $80k), your effective rate is much less: for example, $80k income, you pay taxes on $73800. That ends up being $14,600, for an effective rate in total of 17.9%. Let's say you had the same salary, $80k, from 20 to 65, and for 45 years saved up 10k a year, plus earned enough returns to pay you out $80k a year in retirement. In a Roth, you pay 25% on all $10k. In a traditional, you save that $2500 a year (because it comes off the top, the amount over $36900), and then pay 17.9% during retirement (your effective tax rate, because it's the amount in total that matters). So for Roth you had 7500*(returns), while for Traditional the correct amount isn't 10k*(returns)*0.75, but 10k*(returns)*0.821. You make the difference between .75 and .82 back even with the identical income. [Of course, if your $10k would take you down a marginal bracket, then it also has an 'effective' tax rate of something between the two rates.] Thus, Roth makes sense if you expect your effective tax rate to be higher in retirement than it is now. This is very possible, still, because for people like me with a mortgage, high property taxes, two kids, and student loans, my marginal tax rate is pretty low - even with a reasonably nice salary I still pay 15% on the stuff that's heading into my IRA. (Sadly, my employer has only a traditional 401k, but they also contribute to it without requiring a match so I won't complain too much.) Since I expect my eventual tax rate to be in that 18-20% at a minimum, I'd benefit from a Roth IRA right now. This matters more for people in the middle brackets - earning high 5 figure salaries as individuals or low 6 figure as a couple - because the big difference is relevant when a large percentage of your income is in the 15% and below brackets. If you're earning $200k, then so much of your income is taxed at 28-33% it doesn't make nearly as much of a difference, and odds are you can play various tricks when you're retiring to avoid having as high of a tax rate.",
"title": ""
},
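The passage above hinges on the difference between marginal and effective tax rates. The sketch below reproduces that arithmetic with the bracket figures quoted in the passage (2014-style, single filer), showing the roughly 17.9% effective rate on $80k of income.

```python
# Sketch of marginal vs. effective rate using the bracket slices quoted above.

brackets = [            # (width of slice in dollars, rate applied to that slice)
    (6_200, 0.00),      # tax-free slice from the passage
    (9_075, 0.10),
    (27_825, 0.15),
    (float("inf"), 0.25),
]

def tax_owed(income):
    tax, remaining = 0.0, income
    for width, rate in brackets:
        slice_amount = min(remaining, width)
        tax += slice_amount * rate
        remaining -= slice_amount
        if remaining <= 0:
            break
    return tax

income = 80_000
tax = tax_owed(income)
print(f"Tax: {tax:,.0f}, marginal rate: 25%, effective rate: {tax / income:.1%}")
# -> effective rate of roughly 17.9%, matching the passage
```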
{
"docid": "2ccd5eb1d0b5465caec02197574beaf4",
"text": "This all comes down to time: You can spend the maximum on taxes and penalties and have your money now. Or you can wait about a decade and not pay a cent in taxes or penalties. Consider (assuming no other us income and 2017 tax brackets which we know will change): Option 1 (1 year): Take all the money next year and pay the taxes and penalty: Option 2 (2 years): Spread it out to barely exceed the 10% bracket: Option 3 (6 years): Spread it out to cover your Standard Deduction each year: Option 4 (6-11 years): Same as Option 3 but via a Roth Conversion Ladder:",
"title": ""
},
{
"docid": "f08c6c36927d6dfa44a0d15516a956a5",
"text": "Why not just deposit to a Traditional IRA, and convert it to Roth? If you have pretax IRA money, you need to pay prorated tax (on what wasn't yet taxed) but that's it. It rarely makes sense to ask for a lower wage. Does your company offer a 401(k) account? To clarify, the existing Traditional IRA balance is the problem. The issue arises when you have a new deposit that otherwise isn't deductible and try to convert it. Absent that existing IRA, the immediate conversion is tax free. Now, with that IRA in place the conversion prorates some of that pretax money, and you are subject to a tax bill.",
"title": ""
},
{
"docid": "980789da5abf6464c0e7ff07ef72bc5e",
"text": "\"You have several questions in your post so I'll deal with them individually: Is taking small sums from your IRA really that detrimental? I mean as far as tax is concerned? Percentage wise, you pay the tax on the amount plus a 10% penalty, plus the opportunity cost of the gains that the money would have gotten. At 6% growth annually, in 5 years that's more than a 34% loss. There are much cheaper ways to get funds than tapping your IRA. Isn't the 10% \"\"penalty\"\" really to cover SS and the medicare tax that you did not pay before putting money into your retirement? No - you still pay SS and medicare on your gross income - 401(k) contributions just reduce how much you pay in income tax. The 10% penalty is to dissuade you from using retirement money before you retire. If I ... contributed that to my IRA before taxes (including SS and medicare tax) that money would gain 6% interest. Again, you would still pay SS and Medicare, and like you say there's no guarantee that you'll earn 6% on your money. I don't think you can pay taxes up front when making an early withdrawal from an IRA can you? This one you got right. When you file your taxes, your IRA contributions for the year are totaled up and are deducted from your gross income for tax purposes. There's no tax effect when you make the contribution. Would it not be better to contribute that $5500 to my IRA and if I didn't need it, great, let it grow but if I did need it toward the end of the year, do an early withdrawal? So what do you plan your tax withholdings against? Do you plan on keeping it there (reducing your withholdings) and pay a big tax bill (plus possibly penalties) if you \"\"need it\"\"? Or do you plan to take it out and have a big refund when you file your taxes? You might be better off saving that up in a savings account during the year, and if at the end of the year you didn't use it, then make an IRA contribution, which will lower the taxes you pay. Don't use your IRA as a \"\"hopeful\"\" savings account. So if I needed to withdrawal $5500 and I am in the 25% tax bracket, I would owe the government $1925 in taxes+ 10% penalty. So if I withdrew $7425 to cover the tax and penalty, I would then be taxed $2600 (an additional $675). Sounds like a cat chasing it's tail trying to cover the tax. Yes if you take a withdrawal to pay the taxes. If you pay the tax with non-retirement money then the cycle stops. how can I make a withdrawal from an IRA without having to pay tax on tax. Pay cash for the tax and penalty rather then taking another withdrawal to pay the tax. If you can't afford the tax and penalty in cash, then don't withdraw at all. based on this year's W-2 form, I had an accountant do my taxes and the $27K loan was added as earned income then in another block there was the $2700 amount for the penalty. So you paid 25% in income tax for the earned income and an additional 10% penalty. So in your case it was a 35% overall \"\"tax\"\" instead of the 40% rule of thumb (since many people are in 28% and 35% tax brackets) The bottom line is it sounds like you are completely unorganized and have absolutely no margin to cover any unexpected expenses. I would stop contributing to retirement today until you can get control of your spending, get on a budget, and stop trying to use your IRA as a piggy bank. If you don't plan on using the money for retirement then don't put it in an IRA. Stop borrowing from it and getting into further binds that force you to make bad financial decisions. 
You don't go into detail about any other aspects (mortgage? car loans? consumer debt?) to even begin to know where the real problem is. So you need to write everything down that you own and you owe, write out your monthly expenses and income, and figure out what you can cut if needed in order to build up some cash savings. Until then, you're driving across country in a car with no tires, worrying about which highway will give you the best gas mileage.\"",
"title": ""
},
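The "cat chasing its tail" arithmetic in the passage above has a closed form: if every early-withdrawn dollar loses 25% tax plus a 10% penalty, the gross withdrawal needed to net a target amount is target / (1 - 0.35). A minimal sketch using the passage's $5,500 example:

```python
# Sketch of grossing up an early IRA withdrawal for tax plus penalty.
# Assumption: a flat 25% marginal tax rate and 10% penalty, as in the passage.

net_needed = 5_500
combined_rate = 0.25 + 0.10          # marginal tax + early-withdrawal penalty

gross = net_needed / (1 - combined_rate)
print(f"Withdraw {gross:,.2f} to keep {net_needed:,} after tax and penalty")

# The same answer by iterating the "withdraw more to cover the tax" chase:
guess = net_needed
for _ in range(50):
    guess = net_needed + guess * combined_rate
print(f"Iterative chase converges to {guess:,.2f}")
```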
{
"docid": "2eb5e5bdd4912cf03a38d7a6987476bd",
"text": "\"Your real question, \"\"why is this not discussed more?\"\" is intriguing. I think the media are doing a better job bringing these things into the topics they like to ponder, just not enough, yet. You actually produced the answer to How are long-term capital gains taxed if the gain pushes income into a new tax bracket? so you understand how it works. I am a fan of bracket topping. e.g. A young couple should try to top off their 15% bracket by staying with Roth but then using pretax IRA/401(k) to not creep into 25% bracket. For this discussion, 2013 numbers, a blank return (i.e. no schedule A, no other income) shows a couple with a gross $92,500 being at the 15%/25% line. It happens that $20K is exactly the sum of their standard deduction, and 2 exemptions. The last clean Distribution of Income Data is from 2006, but since wages haven't exploded and inflation has been low, it's fair to say that from the $92,000 representing the top 20% of earners, it won't have many more than top 25% today. So, yes, this is a great opportunity for most people. Any married couple with under that $92,500 figure can use this strategy to exploit your observation, and step up their basis each year. To littleadv objection - I imagine an older couple grossing $75K, by selling stock with $10K in LT gains just getting rid of the potential 15% bill at retirement. No trading cost if a mutual fund, just $20 or so if stocks. The more important point, not yet mentioned - even in a low cost 401(k), a lifetime of savings results in all gains being turned in ordinary income. And the case is strong for 'deposit to the match but no no more' as this strategy would let 2/3 of us pay zero on those gains. (To try to address the rest of your questions a bit - the strategy applies to a small sliver of people. 25% have income too high, the bottom 50% or so, have virtually no savings. Much of the 25% that remain have savings in tax sheltered accounts. With the 2013 401(k) limit of $17,500, a 40 year old couple can save $35,000. This easily suck in most of one's long term retirement savings. We can discuss demographics all day, but I think this addresses your question.) If you add any comments, I'll probably address them via edits, avoiding a long dialog below.\"",
"title": ""
}
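The bracket-topping example in the passage above reduces to a headroom calculation. A minimal sketch, assuming the passage's 2013 married-filing-jointly figures; the specific thresholds change year to year.

```python
# Sketch of bracket topping: long-term gains that fit under the top of the
# 15% bracket are taxed at 0% (2013 figures quoted in the passage).

top_of_15_pct_gross = 92_500     # gross income at the 15%/25% line (per the passage)
gross_income = 75_000            # the older couple in the example
lt_gains_to_realize = 10_000

headroom = top_of_15_pct_gross - gross_income
gains_at_zero = min(lt_gains_to_realize, headroom)
gains_at_15 = lt_gains_to_realize - gains_at_zero

print(f"Headroom in the 15% bracket: {headroom:,}")
print(f"Gains taxed at 0%: {gains_at_zero:,}; taxed at 15%: {gains_at_15:,}")
```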
] |
fiqa
|
6184bfa70165732b2cf52d968d480452
|
How can I find/compare custodians for my HSA in the United States?
|
[
{
"docid": "e0061a162934e232124915313932503a",
"text": "In general, things to look for are: Things to look out for: I'd recommend two places: I'd recommend reading up on HSA's in this related question here.",
"title": ""
},
{
"docid": "cf252386667d75692b8ba238d448b4a4",
"text": "\"The account I have found that works best as a HSA is Alliant Credit Union. They have fee-free HSA (no fees for almost all types of transactions or monthly fees) and a fairly decent online banking website. I've been with them for about 5 years now without trouble. FYI - They are a credit union not a bank so you do have to make a small $10 donation to one of their charities to become \"\"eligible\"\" for opening the account.\"",
"title": ""
}
] |
[
{
"docid": "7a42529b88b2ac529ec2f6a1d17199d1",
"text": "Yes on the August expenses, No on the April; the expenses must have happened after the HSA was opened. Also, note that you're limited to (in 2015) $3350 of deposits to the HSA in a single year, so you can only put $2350 more into the HSA. The IRS form for HSAs looks something like this: 1) How much money did you take from your HSA? 2) How much were your qualified medical expenses? 3) If (1) > (2), give us a bunch of money.",
"title": ""
},
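The passage above applies two rules: the annual contribution cap and the tax on distributions that exceed qualified expenses. Below is a minimal sketch of both, assuming the 2015 self-only limit quoted in the passage; the $1,000 already-contributed figure is my assumption, implied by the "$2,350 more".

```python
# Sketch of the HSA rules described above: contribution room and the taxable
# portion of a distribution. Figures other than the $3,350 limit are assumed.

annual_limit = 3_350
already_contributed = 1_000          # assumed, to match the passage's $2,350
room_left = annual_limit - already_contributed
print(f"Remaining contribution room: {room_left:,}")

def taxable_hsa_distribution(distributions, qualified_expenses):
    # Only expenses incurred after the HSA was opened count as qualified.
    return max(0, distributions - qualified_expenses)

print(taxable_hsa_distribution(distributions=2_000, qualified_expenses=1_500))  # 500 taxable
```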
{
"docid": "2295e2e52fd7aa8c40e93939275455eb",
"text": "But they aren't, your number fails to compare the US to any other countries in 2013. My numbers do compare the rate in the US to other countries over four different years. Edit: also, it is funny that the rate cost rise in the US has increased since we passed the afordable care act... hmmm....",
"title": ""
},
{
"docid": "103e788a721ad7bd848850ab6c53da9d",
"text": "You can't roll her HSA account into yours, but you can roll her HSA account into another HSA account that is hers. A $5 per month fee for an HSA account is ridiculous. Find another account that has no fees, and move the money there. I suggest talking to your local credit union.",
"title": ""
},
{
"docid": "a218b268aee293bf7feabf28e3b83c0f",
"text": "I fell into a similar situation as you. I spent a lot of time trying to understand this, and the instructions leave a lot to be desired. What follows is my ultimate decisions, and my rationale. My taxes have already been filed, so I will let you know if I get audited! 1.) So in cases like this I try to understand the intent. In this case section III is trying to understand if pre-tax money was added to your HSA that you were not entitled too. As you describe, this does not apply to you. I would think you should be ok not including section III (I didn't.) HOWEVER, I am not a tax-lawyer or even a lawyer! 2.) I do not believe these are medical distributions From the 8889 doc.... Qualified HSA distribution. This is a distribution from a health flexible spending arrangement (FSA) or health reimbursement arrangement (HRA) that is contributed by your employer directly to your HSA. This is a one-time distribution from any of these arrangements. The distribution is treated as a rollover contribution to the HSA and is subject to the testing period rules shown below. See Pub. 969 for more information. So I don't think you have anything to report here. 3.) As you have no excess this line can just be zero. 4.) From the 8889 doc This is a distribution from your traditional IRA or Roth IRA to your HSA in a direct trustee-to-trustee transfer. Again, I don't think this applies to you so you can enter zero. 5.) This one is the easiest. You can always get this money tax free if you use it for qualified medical expenses. From the 8889 Distributions from an HSA used exclusively to pay qualified medical expenses of the account beneficiary, spouse, or dependents are excludable from gross income. (See the line 15 instructions for information on medical expenses of dependents not claimed on your return.) You can receive distributions from an HSA even if you are not currently eligible to have contributions made to the HSA. However, any part of a distribution not used to pay qualified medical expenses is includible in gross income and is subject to an additional 20% tax unless an exception applies. I hope this helps!",
"title": ""
},
{
"docid": "24bb18e4837526c4fedf26ad190601c7",
"text": "Yup, if he/she is talking about a broker/dealer, but if he's talking to an RIA and is trying to find out who the custodian is then he won't have a statement yet. I don't think he has opened the account yet, but I'm not sure and could be totally misunderstanding the question.",
"title": ""
},
{
"docid": "57e1115d5f30efd13813bb51c89ac504",
"text": "Hey, I was thinking back to that X-Ray you got done at the hospital, that you said was cheaper than 90% of the population. I am wondering if maybe that hospital wasn't one of the many that qualify for DSH reimbursement payments. What could have happened is that they saw that you are uninsured, and made a decision that they would only charge you for the portion that they didn't think Medicare / Medicaid would reimburse. If that happened, it wouldn't even show up on your credit report, as the hospital is the one that would file a credit claim. Likely, if they have to go through this a lot, they wouldn't even waste time filing a credit claim, they would just go after a reimbursement through Medicare / Medicaid. And thus, to you, it would just look like a very small bill, but in reality it would only represent a smaller portion of the true bill. I would also wonder, if they do a lot of these, if they aren't also one of the hospitals that article I linked to showed was super-inflating the prices of uninsured in the hopes of getting a larger portion reimbursed.",
"title": ""
},
{
"docid": "77de1f0828136343b16e6cd31563932d",
"text": "First, as noted in the comments, you need to pay attention to your network providers. If you are unable to pay exorbitant prices out of pocket, then find an in-network medical provider. if you are unhappy with the in-network provider list (e.g. too distant or not specialists), then discuss switching to another plan or insurer with your employer or broker. Second, many providers will have out of pocket or uninsured price lists, often seen in outdated formats or disused binders. Since you have asked for price lists and not been provided one, I would pursue it with the practice manager (or equivalent, or else a doctor) and ask if they have one. It's possible that the clinic has an out of pocket price list but the front line staff is unaware of it and was never trained on it. Third, if you efforts to secure a price list fail, and you are especially committed to this specific provider, then I would consider engaging in a friendly by direct negotiation with the practice manager or other responsible person. Person they will be amenable to creating a list of prices (if you are particularly proactive and aggressive, you could offer to find out of pocket price lists from other clinics nearby). You could also flat out ask them to charge you a certain fee for office visits (if you do this, try to get some sort of offer or agreed price list in writing). Most medical practices are uncomfortable asking patients for money, so that may mean flat refusal to negotiate but it may also mean surprising willingness to work with you. This route is highly unpredictable before you go down it, and it's dependent on all sorts of things like the ownership structure, business model, and the personalities of the key people there. The easiest answer is to switch clinics. This one sounds very unfriendly to HSA patients.",
"title": ""
},
{
"docid": "30e8471d59412577307653b7213b0f94",
"text": "I compared it to NJ, NY and various other states through a database that escapes me now. It's an independent database that was partially funded by the Bill and Melinda Gates foundation and I cannot find it now. I also went through the arbitration opinion between CPS and the union, as another datapoint. I'm sorry. I will try to find it and get back to you.",
"title": ""
},
{
"docid": "394e2c739f4870cd08159d90823caba2",
"text": "\"I had an HSA for two or three years. I found very routinely that my insurance company had negotiated rates with in-network providers. So as I never hit the deductible, I always had to pay 100% of the negotiated rate, but it was still much less than the providers general rate. Sometimes dramatically so. Like I had some blood and urine tests done and the general rate was $450 but the negotiated rate was only $40. I had laser eye surgery and the general rate was something like $1500 but the negotiated rate was more like $500. Et cetera. Other times it was the same or trivially different, like routine office visits it made no difference. I found that I could call the insurance company and ask for the negotiated rate and they would tell me. When I asked the doctor or the hospital, they either couldn't tell me or they wouldn't. It's possible that the doctor's office doesn't really know what rates they've agreed to, they might have just signed some contract with the insurance company that says, yes, we'll accept whatever you give us. But either way, I had to go to the insurance company to find out. You'd think they'd just publish the list on a web site or something. After all, it's to the insurance company's advantage if you go to the cheapest provider. With a \"\"regular\"\" non-HSA plan, they're share of the total is less. Even with an HSA plan if you go to a cheaper provider you are less likely to hit the deductible. Yes, medical care in the U.S. is rather bizarre in that providers routinely expect you to commit to paying for their services before they will tell you the price. Can you imagine any other industry working this way? Can you imagine buying a car and the dealer saying, \"\"I have no idea what this car costs. If you like it, great, take it and drive it home, and in a few weeks we'll send you a bill. And of course whatever amount we put on that bill you are legally obligated to pay, but we refuse to tell you what that amount will be.\"\" The American Medical Association used to have a policy that they considered it \"\"unethical\"\" for doctors to tell patients the price of treatment in advance. I don't know if they still do.\"",
"title": ""
},
{
"docid": "cd55f90bd71c1fc6fbf7018fd284c21f",
"text": "\"Uniform Transfer to Minors Act (UTMA) and Uniform Gift to Minors Act (UGMA) accounts in the United States are accounts that belong to your child, but you can deposit money into. When the child attains his/her majority, the money becomes theirs to spend however they wish. Prior to attaining their majority, a custodian must sign off on withdrawals. Now, they are not foolproof; legally, you can withdraw money if it is spent on the child's behalf, so that can be gamed. What you can do to protect against that is to make another person the custodian (or, perhaps make them joint custodians with yourself, requiring both signatures for withdrawals). UTMA/UGMA accounts do not have to be bank savings accounts; for example, both of my children have accounts at Vanguard which are effectively their college savings accounts. They're invested in various ETFs and similar kinds of investments; you're welcome to choose from a wide variety of options depending on risk tolerance. Typically these accounts have relatively small fees, particularly if you have a reasonable minimum balance (I think USD$10k is a common minimum for avoiding larger fees). If you are looking for something even more secure than a UGMA or UTMA account, you can set up a trust. These have several major differences over the UGMA/UTMA accounts: Some of course consider the second point an advantage, some a disadvantage - we (and Grandma) prefer to let our children make their own choices re: college, while others may not prefer that. Also worth noting as a difference - and concern to think about - in these two. A UGMA or UTMA account that generates income may have taxable events - interest or dividend income. If that's over a relatively low threshhold, about $1050 this year, those earnings will be taxed (on the child's own tax return). If it's over $2100 (this year), those earnings will be taxed at the parents' tax rate (\"\"kiddie tax\"\"). Trusts are slightly different; trusts themselves are taxed, and have their own tax returns. If you do set one of those up, the lawyer who helps you do so should inform you of the tax implications and either hook you up with an accountant or point you to resources to handle the taxes yourself.\"",
"title": ""
},
{
"docid": "e2a2fe0109c08c64a110380f5b02751d",
"text": "Much of this is incorrect. Aetna owns Payflex for starters, and it's your EMPLOYER who decides which banks and brokers to offer, not Payflex. An HSA is a checking account with an investment account option after a minimum balance is met. A majority of U.S. employers only OFFER an HSA option but don't contribute a penny, so you're lucky you get anything. The easy solution is just keep the money that is sent to your HSA checking account in your checking account, and once a year roll it over into a different bank's HSA. The vast majority of banks offer HSAs that have no ties to a particular broker (i.e. Citibank, PNC, Chase). I have all my HSA funds in HSA Bank which is online but services lots of employers. Not true that most payroll deductions or employer contributions go to a single HSA custodian (bank). They might offer a single bank that either contracts with an investment provider or lets you invest anywhere. But most employers making contributions are large or mid-market employers offering multiple banks, and that trend is growing fast because of defined contribution, private exchanges and vendor product redesigns. Basically, nobody likes having a second bank account for their HSA when their home bank offers one.",
"title": ""
},
{
"docid": "7e11e474ced9934a0a01baefc588fd9f",
"text": "It is my understanding that the money in the HSA is yours to keep forever, even if you leave the country. When you leave the country and no longer have an HSA-eligible High Deductible Health Plan, you will no longer be able to contribute new money to your HSA. However, you can still spend the money on eligible medical expenses, even if these expenses are outside the U.S. However, there are a few caveats: The HSA money will remain in a U.S. HSA bank account. You won't be able to transfer the entire account to a new account in your home country without paying taxes and penalty. Therefore, you need to have a mechanism for accessing and transferring the money from abroad, so that you can reimburse yourself as you have medical expenses, until the HSA account is empty. Even after you leave the U.S., as long as you have the HSA in place, you will need to file a U.S. tax return (form 1040NR) in any year that you have an HSA distribution. If you decide to take the money out without medical expenses, you will need to pay income tax on the money plus a 20% penalty. See How do I withdraw all money from my HSA account as a non-resident? for more information.",
"title": ""
},
{
"docid": "e1067b2eafc8a402c2c1389c22c2f781",
"text": "Ironically, anyone can say anything, but it doesn't make it true. In normal times, the IRS can audit you for 3 years, or up to 6 for certain cases of fraud - From the IRS site - How far back can the IRS go to audit my return? Generally, the IRS can include returns filed within the last three years in an audit. If we identify a substantial error, we may add additional years. We usually don’t go back more than the last six years. HSA spending is reported each year, just like any Schedule A deductions. Each year, I have my charitable receipts, and they are not sent in. They are there in case of audit. I don't need to save them forever, nor does one need their medical bills forever. 3 years. 6 if you wish to be paranoid. The EOBs should be enough. The HSA is unique in that you deposit pretax dollars (like a traditional IRA or 401(k)) yet withdrawals for qualified expenses come out tax free (like a Roth). In my opinion, as long as your medical plan qualifies you for an HSA, I'd maximize its use. The older you get, the more bills you'll have, and at some point, you'll be grateful to your younger self that you did this.",
"title": ""
},
{
"docid": "9fa722d5b542e019e1cfa588bc41bc6b",
"text": "You can open an HSA account with any financial institution that you like, and roll over the money from your current account into the new one. Since you are no longer in a High Deductible Health Plan, you can't contribute any new money into an HSA, but you can still spend the money in your HSA on eligible medical expenses, until it is gone. There are lots of things that you can spend HSA money on, so there is no need to cash out and take on taxes and penalties. Yes, there are HSA accounts that don't charge ongoing maintenance fees. Check with a local credit union; they usually have no-fee HSA accounts.",
"title": ""
},
{
"docid": "c6ec4c6e33b1f072622f1c14cf686071",
"text": "Paytrust seems to be the only game in town. We've changed banks several times over the last 15 years and I can tell you that using a bank's bill pay service locks you in, big time. I loved paytrust because I could make one change if we changed banks. If you're using a bank directly for your bills, the ides of recreating your payee list is daunting.",
"title": ""
}
] |
fiqa
|
e5a63c4a98f3d11704685ea952d39daf
|
Is the interest on money borrowed on margin in/for an RRSP considered tax deductible?
|
[
{
"docid": "8c25a3bed4451bce533aa676d2a3bc74",
"text": "I believe your question is based on a false premise. First, no broker, that I know of, provides an RRSP account that is a margin account. RRSP accounts follow cash settlement rules. If you don't have the cash available, you can't buy a stock. You can't borrow money from your broker within your RRSP. If you want to borrow money to invest in your RRSP, you must borrow outside from another source, and make a contribution to your RRSP. And, if you do this, the loan interest is not considered tax deductible. In order for investment loan interest to be tax deductible, you'd need to invest outside of a registered type of account, e.g. using a regular non-tax-sheltered account. Even then, what you can deduct may be limited. Refer to CRA - Line 221 - Carrying charges and interest expenses: You can claim the following carrying charges and interest [...] [...] You cannot deduct on line 221 any of the following amounts:",
"title": ""
}
] |
[
{
"docid": "f5d03797d7499736c830449098a393c1",
"text": "\"Is all interest on a first time home deductible on taxes? What does that even mean? If I pay $14,000 in taxes will My taxes be $14,000 less. Will my taxable income by that much less? If you use the standard deduction in the US (assuming United States), you will have 0 benefit from a mortgage. If you itemize deductions, then your interest paid (not principal) and your property tax paid is deductible and reduces your income for tax purposes. If your marginal tax rate is 25% and you pay $10000 in interest and property tax, then when you file your taxes, you'll owe (or get a refund) of $2500 (marginal tax rate * (amount of interest + property tax)). I have heard the term \"\"The equity on your home is like a bank\"\". What does that mean? I suppose I could borrow using the equity in my home as collateral? If you pay an extra $500 to your mortgage, then your equity in your house goes up by $500 as well. When you pay down the principal by $500 on a car loan (depreciating asset) you end up with less than $500 in value in the car because the car's value is going down. When you do the same in an appreciating asset, you still have that money available to you though you either need to sell or get a loan to use that money. Are there any other general benefits that would drive me from paying $800 in rent, to owning a house? There are several other benefits. These are a few of the positives, but know that there are many negatives to home ownership and the cost of real estate transactions usually dictate that buying doesn't make sense until you want to stay put for 5-7 years. A shorter duration than that usually are better served by renting. The amount of maintenance on a house you own is almost always under estimated by new home owners.\"",
"title": ""
},
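The deduction arithmetic in the passage above is simple but often misunderstood: itemizing reduces taxable income, so the cash saving is the marginal rate times the deducted amount, not the deducted amount itself. A minimal sketch using the passage's figures (it ignores the comparison against the standard deduction for brevity):

```python
# Sketch of the itemized-deduction saving from the passage's example.
# Assumption: the full $10,000 of interest + property tax is deductible on top
# of other itemized deductions already exceeding the standard deduction.

marginal_rate = 0.25
deductible_amount = 10_000      # interest + property tax in the passage's example

tax_saving = marginal_rate * deductible_amount
print(f"Tax bill reduced by about {tax_saving:,.0f}")   # ~2,500
```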
{
"docid": "513e076455dc7595ae4eb802a5b8278f",
"text": "\"Regardless of what the credit reporting agencies or brokerages say, the fact is that brokerage margin is not reported to the credit reporting agencies. I have \"\"borrowed\"\" hundreds of thousands of dollars on margin from dozens of brokerages over the years and have never seen a dime of it reported nor have I ever heard of it ever being reported for someone else. ...and it's easy to see why this would be so because \"\"borrowing\"\" on margin isn't really borrowing at all because you always must have positive equity in the account at all times. So you aren't borrowing anything, you just have an investment contract that determines your gains/losses as if you had borrowed (in other words, it's simulated).\"",
"title": ""
},
{
"docid": "01146864ca51d161601ebe09cd8359b9",
"text": "First of all, this is a situation when a consultation with a EA working with S-Corporations in California, CA-licensed CPA or tax preparer (California licenses tax preparers as well) is in order. I'm neither of those, and my answer is not a tax advice of any kind. You're looking at schedule CA line 17 (see page 42 in the 540NR booklet). The instructions refer you to form 3885A. You need to read the instructions carefully. California is notorious for not conforming to the Federal tax law. Specifically, to the issue of the interest attributable to investment in S-Corp, I do not know if CA conforms. I couldn't find any sources saying that it doesn't, but then again - I'm not a professional. It may be that there's an obscure provision invalidating this deduction, living in California myself - I wouldn't be surprised. So I suggest hiring a CA-licensed tax preparer to do this tax return for you, at least for the first year.",
"title": ""
},
{
"docid": "348ecf0fe173c503a0275e31aa820056",
"text": "Revenue Canada allows for some amount of tax deferral via several methods. The point is that none of them allow you to avoid tax, but by deferring from years when you have high income to years when you have lower income allows you to realize less total tax paid due to the marginal rate for personal income tax. The corporate dividend approach (as explained in another answer) is one way. TFSAs are another way, but as you point out, they have limits. Since you brought TFSAs into your question: About the best and easiest tax deferral option available in Canada is the RRSP. If you don't have a company pension, you can contribute something like 18% of your income. If you have a pension plan, you may still be able to contribute to an RRSP as well, but the maximum contribution amount will be lower. The contribution lowers your taxable income which can save you tax. Interest earned on the equity in your RRSP isn't taxed. Tax is only paid on money drawn from the plan because it is deemed income in that year. They are intended for retirement, but you're allowed to withdraw at any time, so if you have little or no income in a year, you can draw money from your RRSP. Tax is withheld, which you may or may not get back depending on your taxable income for that year. You can think of it as a way to level your income and lower your legitimate tax burden",
"title": ""
},
{
"docid": "6d8fae7ab371dc25faf4139cdf4ce360",
"text": "If you itemize your deductions then the interest that you pay on your primary residence is tax deductible. Also realestate tax is also deductible. Both go on Schedule A. The car payment is not tax deductible. You will want to be careful about claiming business deduction for home or car. The IRS has very strict rules and if you have any personal use you can disqualify the deduction. For the car you often need to use the mileage reimbursement rates. If you use the car exclusively for work, then a lease may make more sense as you can expense the lease payment whereas with the car you need to follow the depreciation schedule. If you are looking to claim business expense of car or home, it would be a very good idea to get professional tax advice to ensure that you do not run afoul of the IRS.",
"title": ""
},
{
"docid": "593cbd452c7286b4358b8973a7511d16",
"text": "\"First off, the \"\"mortgage interest is tax deductible\"\" argument is a red herring. What \"\"tax deductible\"\" sounds like it means is \"\"if I pay $100 on X, I can pay $100 less on my taxes\"\". If that were true, you're still not saving any money overall, so it doesn't help you any in the immediate term, and it's actually a bad idea long-term because that mortgage interest compounds, but you don't pay compound interest on taxes. But that's not what it actually means. What it actually means is that you can deduct some percentage of that $100, (usually not all of it,) from your gross income, (not from the final amount of tax you pay,) which reduces your top-line \"\"income subject to taxation.\"\" Unless you're just barely over the line of a tax bracket, spending money on something \"\"tax deductible\"\" is rarely a net gain. Having gotten that out of the way, pay down the mortgage first. It's a very simple matter of numbers: Anything you pay on a long-term debt is money you would have paid anyway, but it eliminates interest on that payment (and all compoundings thereof) from the equation for the entire duration of the loan. So--ignoring for the moment the possibility of extreme situations like default and bank failure--you can consider it to be essentially a guaranteed, risk-free investment that will pay you dividends equal to the rate of interest on the loan, for the entire duration of the loan. The mortgage is 3.9%, presumably for 30 years. The car loan is 1.9% for a lot less than that. Not sure how long; let's just pull a number out of a hat and say \"\"5 years.\"\" If you were given the option to invest at a guaranteed 3.9% for 30 years, or a guaranteed 1.9% for 5 years, which would you choose? It's a no-brainer when you look at it that way.\"",
"title": ""
},
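The guaranteed-return framing in the passage above can be made concrete with a quick calculation. The sketch below is illustrative only: it assumes a hypothetical $10,000 of spare cash and the 3.9%/30-year and 1.9%/5-year figures quoted in the answer, and it simply compounds the avoided interest over each loan's remaining term rather than modelling a full amortization schedule.

```python
# Rough comparison of prepaying two loans, treating each prepayment as a
# risk-free investment that "earns" the loan's interest rate for its term.
# Rates and terms (3.9% / 30 yr mortgage, 1.9% / 5 yr car loan) come from the
# answer; the $10,000 lump sum is a made-up example amount.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound `principal` annually at `annual_rate` for `years` years."""
    return principal * (1 + annual_rate) ** years

lump_sum = 10_000.00

mortgage_benefit = future_value(lump_sum, 0.039, 30) - lump_sum
car_loan_benefit = future_value(lump_sum, 0.019, 5) - lump_sum

print(f"Interest avoided by prepaying the mortgage: ${mortgage_benefit:,.2f}")
print(f"Interest avoided by prepaying the car loan: ${car_loan_benefit:,.2f}")
```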
{
"docid": "962afc559efc7ebe9dc4e91ee1f8af04",
"text": "The answer is simple. You can generally claim a deduction for an expense if that expense was used to derive an income. Most business expenses are used to derive profits and income, most individual expenses are not. Of course social policy sometimes gets in the way and allows for deductions where they usually wouldn't be allowed. Regarding the interest on a mortgage being deductible whilst the principal isn't, that is because it is the interest which is the annual expense. By the way deductions for mortgage interest in the USA for a house you live in is only allowed due to social policy, as there is no income (rent) being produced here, unlike with an investment property.",
"title": ""
},
{
"docid": "d3105ab8826e6eb604c6406d337dbae3",
"text": "You can claim a deduction only if all of your business is conducted from the home, i.e. your home is your principal place of business - not just if you work from home sometimes. The CRA (Canada Revenue Agency) has pretty strict guidelines listed here, but once you're sure you qualify for a deduction, the next step would be to determine what portion of your home qualifies. You cannot attempt to deduct your entire mortgage simply because you run your business out of your home. The portion of your mortgage and other related & allowable home expense deductions has to be pro-rated to be equal to or less than the portion of your home you use for business. Simply put, if your business is operated out of a 120 sq-ft self-contained space, and your home's total square-footage is 2400 sq-ft, you can deduct 5% of your expenses (120/2,400 = 0.05). Hope this helps!",
"title": ""
},
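As a minimal illustration of the pro-rating described above, the snippet below recomputes the 120 sq-ft / 2,400 sq-ft example; the annual expense total is a made-up placeholder, not a figure from the passage.

```python
# Pro-rate home expenses by the share of floor area used for business,
# following the 120 / 2,400 sq-ft example in the passage.
business_area_sqft = 120.0
total_area_sqft = 2_400.0
annual_home_expenses = 20_000.00  # hypothetical total of eligible expenses

business_fraction = business_area_sqft / total_area_sqft  # 0.05
deductible = annual_home_expenses * business_fraction

print(f"Business-use fraction: {business_fraction:.0%}")   # 5%
print(f"Deductible portion:    ${deductible:,.2f}")        # $1,000.00
```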
{
"docid": "03a37f15aa46d8411b09400ba98544cd",
"text": "This question is indeed rather complicated. Let's simplify it a little bit. Paying down your mortgage makes sense if your expected return in the rest of your portfolio is less than the cost of the mortgage. In many cases, people may also decide to pay down their mortgage because they are risk-averse and do not like carrying debt. There's no tax benefit to doing so, though; Canada doesn't generally allow you to write off mortgage interest, unlike the U.S. As to keeping money in the corporation or not, I'm not going to address that. I don't have a firm enough understanding of corporate taxation. Canadian Couch Potato advises treating all of your investment assets as one large portfolio. That is what you are trying to do here. However, let's consider a different approach. If you do not have enough money to max out your RRSP or TFSA, you may choose to keep your TFSA for an emergency fund, where the money is kept highly liquid. Keep your cash in an interest-bearing TFSA, or perhaps invest it in the money market, inside your TFSA. Then, use your RRSP for the rest of your investment money, split according to your investment goals. This is not the most tax-efficient approach, but it is nice and simple. But you are looking for the most tax-efficient approach. So, let's assume you have enough to more than max out your TFSA and RRSP contributions, and all of your investments are going toward your retirement, which is at least a decade away. Because you are not taxed on your investment income from RRSPs (until you withdraw the money) or TFSA, it makes sense to hold the least tax-efficient investments there. Tax-advantaged investments such as Canadian equities should be held in your investment accounts outside of TFSA and RRSPs. Again, the Canadian Couch Potato has a great article on where to put your investment assets. That article covers interest, dividends, foreign dividends, and capital gains, as well as RRSPs, RESPs, and TFSAs. That article recommends holding Canadian equities in a taxable account, REITs in a tax-sheltered account (TFSA or RRSP), bonds, GICs, and money-market funds in a tax-sheltered account (as these count as interest). The article goes into rather more detail than this, and is worth checking out. It mentions the 15% withholding tax on US-listed ETFs, for example. In addition to that website, I recommend the following three books: The above three resources strongly advocate passive indexed investments, which I like but not everyone agrees with. All three specifically discuss tax implications, which is why I include them here.",
"title": ""
},
{
"docid": "27a5a5296e910059e806233cc78595fd",
"text": "We need more info to give a better answer, but in short: if you assume you will make $0 in other employment income next year, there is a HUGE tax benefit in deferring 50k until next year. Total tax savings would probably be something like $15k [rough estimate]. If you took the RRSP deduction this year, you would save something like 20k this year, but then you would be taxed on it next year if you withdraw it, probably paying another 5k the year after. ie: you would get about the same net tax savings in both years, if you contributed to your RRSP and withdrew next year, vs deferring it to next year. On a non-tax basis, you would benefit by having the cash today, so you could earn investment income on your RRSP, but you would want to go low-risk as you need the money next year, so the most you could earn would be something like 1.5k @ 3%. The real benefit to the RRSP contribution is if you defer your withdrawal into your retirement, because you can further defer your taxes into the future, earning investment income in the meantime. But if you need to withdraw next year, you won't get that opportunity.",
"title": ""
},
{
"docid": "0b0630331cf653228dcda6caa4ac50c8",
"text": "Talk about coincidence, we just recieved letters from our bank saying that our interest only loans will be going up by 0.46% and if we want to keep our lower rate we will need to change early to P&I. Now our Interest only periods end in 6 months to about 16 months anyway. We have decided to change to P&I early and save on our interest expenses. Why? Because the main purpose of investing is to make money not to save on tax. Even if you are on the highest marginal tax rate for every extra dollar of expenses you spend and claim as a deduction you will only get about 50 cents back through tax savings. If you are on the lowest marginal tax rate your tax savings will reduce to less than 20 cents for every extra dollar spent. If you are investing in order to save on tax you may be investing for the wrong reasons. Your primary reason for investing should be to make money, for wealth creation. A good reason to stay with an Interest only loan for an investment property would be if you require the extra cash flow you would receive compared with an I&P loan.",
"title": ""
},
{
"docid": "582a70982b15333402b93f3bec430a88",
"text": "You can defer RRSP deductions like you've suggested. Here's an article from the CBC about it: http://www.cbc.ca/news/business/taxseason/story/2010/03/15/f-taxseason-delay.html",
"title": ""
},
{
"docid": "fdf2d38a190b567b108a45c6335bdf81",
"text": "I'm a Finance major in Finland and here is how it would go here. As you loan money to the company, the company has no income, but gains an asset and a liability. When the company then uses the money to pay the bills it does have expenses that accumulate to the end of the accounting period where they have to be declared. These expenses are payed from the asset gained and has no effect to the liability. When the company then makes a profit it is taxable. How ever this taxable profit may be deducted from from a tax reserve accumulated over the last loss periods up to ten years. When the company then pays the loan back it is divided in principal and interest. The principal payment is a deduction in the company's liabilities and has no tax effect. The interest payment the again does have effect in taxes in the way of decreasing them. On your personal side giving loan has no effect. Getting the principal back has no effect. Getting interest for the loan is taxable income. When there are documents signifying the giving the loan and accounting it over the years, there should be no problem paying it back.",
"title": ""
},
{
"docid": "ca816def6c13f526c18f1951bde048f8",
"text": "lets sat If I buy a house on company's name, It will declared as expense and will deduct from profit. but I am not sure If I can rent it out as a IT LTD company. that's my questions. Buying a house is not an expense, it is a transfer of assets. The house itself, is an asset. So if you have $100,000 in cash, buy a house for $35,000, your total assets will remain the same ($100,000), but your asset mix will be different (instead of $100,000 in cash, you now have $65,000 in cash, and $35,000 in property). You can expense the costs associated with buying the house (e.g. taxes, interest, legal fees), but the house itself stays on the asset side of your balance sheet. To refine the example above, if you buy the house for $35,000, and pay $5,000 in misc fees related to purchasing the house, your assets are now $95,000 ($60,000 in cash, $35,000 in house): the $5,000 reduction is from the actual fees associated with the purchase. It is these fees that lower your profit. Being not familiar with UK rules, in Canada and the US, and likely the UK, you would then depreciate the house over its useful life. The depreciation expense is deducted from your annual net income. If you rent out the house, what you can do is expense any maintenance fees, taxes, etc., on the house itself. This expense will count as a negative towards the rental income, lowering your effective taxable income from the rental. E.g. rent out a flat at $1,000/month, but your property taxes are $3,500/year, so your net income for tax purposes (i.e. your taxable income in this case) is $12,000-$3,500=$8,500.",
"title": ""
},
{
"docid": "55686ea4f3dfab64ced24d67c643cf55",
"text": "Candle stick patterns are generally an indication of possible short term changes in price direction (if a reversal pattern). A doji is such a reversal candle, and should be read as there could be a short term change in the direction of price action. A doji is most effective at peaks or troughs, and the outcome can be a higher probability if occuring during overbought conditions (at the peak) or during oversold conditions (at the trough). So a doji should be used for short term changes in direction and not a total change in the overall trend. Although there could be a doji at the very top of an uptrend or at the very bottom of a downtrend, we wouldn't know it was the change of the trend until price action confirms it. The definition of an uptrend is higher highs and higher lows. The definition of a downtrend is lower lows and lower highs. So an uptrend will not be broken until we have a lower high and confirmed by a lower low, or a lower low confirmed by a lower high. Similarly a downtrend will not be broken until we have a higher low confirmed by a higher high or a higher high followed by a higher low. Another thing to consider is that doji's and other candle stick patters work best when the market is trending, even if they are only short term trends. You should usually wait for confirmation of the change in direction by only taking a long trade if price moves above the high of the doji, or only taking a short trade if price moves below the low of the doji.",
"title": ""
}
] |
fiqa
|
ab87483724591f55c6e344d69fd2ef66
|
Get interest on $100K by spending only $2K using FOREX rollovers?
|
[
{
"docid": "febf4114d614ef8371b4a237f32ce7e9",
"text": "\"I'm smart enough to know that the answer to your questions is 'no'. There is no arbitrage scenario where you can trade currencies and be guaranteed a return. If there were, the thousands of PhD's and quants at hedge funds like DEShaw and Bridgewater would have already figured it out. You're basically trying to come up with a scenario that is risk free yet yields you better than market interest rates. Impossible. I'm not smart enough to know why, but my guess is that your statement \"\"I only need $2k margin\"\" is incorrect. You only need $2k as capital, but you are 'borrowing' on margin the other 98k and you'll need to pay interest on that borrowed amount, every day. You also run the risk of your investment turning sour and the trading firm requiring a higher margin.\"",
"title": ""
},
{
"docid": "cbef79be90e2e82d24e6214699fd271e",
"text": "No free lunch You cannot receive risk-free interest on more money than you actually put down. The construct you are proposing is called 'Carry Trade', and will yield you the interest-difference in exchange for assuming currency risk. Negative expectation In the long run one would expect the higher-yielding currency to devalue faster, at a rate that exactly negates the difference in interest. Net profit is therefore zero in the long run. Now factor in the premium that a (forex) broker charges, and now you may expect losses the size of which depends on the leverage chosen. If there was any way that this could reliably produce a profit even without friction (i.e. roll-over, transaction costs, spread), quants would have already arbitraged it away. Intransparancy Additionaly, in my experience true long-term roll-over costs in relation to interest are a lot harder to compute than, for example, the cost of a stock transaction. This makes the whole deal very intransparant. As to the idea of artificially constructing a USD/USD pair: I regret to tell you that such a construct is not possible. For further info, see this question on Carry Trade: Why does Currency Carry Trade work?",
"title": ""
},
{
"docid": "605802582d7668a70b363758d5881d8e",
"text": "I work at a FOREX broker, and can tell you that what you want to do is NOT possible. If someone is telling you it is, they're lying. You could (in theory) make money from the SWAP (the interest you speak of is called SWAP) if you go both short and long on the same currency, but there are various reasons why this never works. Furthermore, I don't know of any brokers that are paying positive SWAP (the interest you speak of is called SWAP) on any currency right now.",
"title": ""
},
{
"docid": "93ed9100864a8c4146441b8c7bc0dab5",
"text": "Now, is there any clever way to combine FOREX transactions so that you receive the US interest on $100K instead of the $2K you deposited as margin? Yes, absolutely. But think about it -- why would the interest rates be different? Imagine you're making two loans, one for 10,000 USD and one for 10,000 CHF, and you're going to charge a different interest rate on the two loans. Why would you do that? There is really only one reason -- you would charge more interest for the currency that you think is less likely to hold its value such that the expected value of the money you are repaid is the same. In other words, currencies pay a higher interest when their value is expected to go down and currencies pay a lower interest when their value is expected to go up. So yes, you could do this. But the profits you make in interest would have to equal the expected loss you would take in the devaluation of the currency. People will only offer you these interest rates if they think the loss will exceed the profit. Unless you know better than them, you will take a loss.",
"title": ""
}
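The intuition in the passage above (higher interest compensates for expected depreciation) can be sketched numerically. The rates below are invented for illustration only; the point is simply that under the parity argument the interest pickup and the expected currency loss roughly cancel.

```python
# Toy uncovered-interest-parity example: borrow in the low-rate currency,
# deposit in the high-rate currency, and see how expected depreciation of
# the high-rate currency eats the interest differential.
low_rate = 0.01      # hypothetical 1% rate on the funding currency
high_rate = 0.05     # hypothetical 5% rate on the target currency

# Under (approximate) uncovered interest parity, the high-rate currency is
# expected to depreciate by roughly the rate differential over the year.
expected_depreciation = high_rate - low_rate

notional = 100_000.00
interest_pickup = notional * (high_rate - low_rate)
expected_fx_loss = notional * expected_depreciation

print(f"Interest pickup:   ${interest_pickup:,.2f}")
print(f"Expected FX loss:  ${expected_fx_loss:,.2f}")
print(f"Expected net gain: ${interest_pickup - expected_fx_loss:,.2f}")  # ~zero
```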
] |
[
{
"docid": "1611faea12bf19b2154ee123778d95d2",
"text": "\"HSBC, Hang Seng, and other HK banks had a series of special savings account offers when I lived in HK a few years ago. Some could be linked to the performance of your favorite stock or country's stock index. Interest rates were higher back then, around 6% one year. What they were effectively doing is taking the interest you would have earned and used it to place a bet on the stock or index in question. Technically, one way this can be done, for instance, is with call options and zero coupon bonds or notes. But there was nothing to strategize with once the account was set up, so the investor did not need to know how it worked behind the scenes... Looking at the deposit plus offering in particular, this one looks a little more dangerous than what I describe. See, now we are in an economy of low almost zero interest rates. So to boost the offered rate the bank is offering you an account where you guarantee the AUD/HKD rate for the bank in exchange for some extra interest. Effectively they sell AUD options (or want to cover their own AUD exposures) and you get some of that as extra interest. Problem is, if the AUD declines, then you lose money because the savings and interest will be converted to AUD at a contractual rate that you are agreeing to now when you take the deposit plus account. This risk of loss is also mentioned in the fine print. I wouldn't recommend this especially if the risks are not clear. If you read the fine print, you may determine you are better off with a multicurrency account, where you can change your HK$ into any currency you like and earn interest in that currency. None of these were \"\"leveraged\"\" forex accounts where you can bet on tiny fluctuations in currencies. Tiny being like 1% or 2% moves. Generally you should beware anything offering 50:1 or more leverage as a way to possibly lose all of your money quickly. Since you mentioned being a US citizen, you should learn about IRS form TD F 90-22.1 (which must be filed yearly if you have over $10,000 in foreign accounts) and google a little about the \"\"foreign account tax compliance act\"\", which shows a shift of the government towards more strict oversight of foreign accounts.\"",
"title": ""
},
{
"docid": "0848988ee6bf5d902b7090dcbc46de00",
"text": "The location does matter in the case where you introduce currency risk; by leaving you US savings in USD, you're basically working on the assumption that the USD will not lose value against the EUR - if it does and you live in the EUR-zone, you've just misplaced some of your capital. Of course that also works the other way around if the USD appreciates against the EUR, you gained some money.",
"title": ""
},
{
"docid": "28f5fd1be3e440ee825ed5e611e92156",
"text": "\"My visa would put the goods on the current monthly balance which is no-interest, but the cash part becomes part of the immediate interest-bearing sum. There is no option for getting cash without paying immediate interest, except perhaps for buying something then immediately returning it, but most merchants will do a refund to the card instead of cash in hand. This is in New Zealand, other regions may have different rules. Also, if I use the \"\"cheque\"\" or \"\"savings\"\" options at the eftpos machine instead of the \"\"credit\"\" option, then I can have cash immediately, withdrawn from my account, with no interest charge. However the account has to have sufficient balance to do so.\"",
"title": ""
},
{
"docid": "d4617c15d1388f86ec15ea8a6de965f5",
"text": "An offset account is simply a savings account which is linked to a loan account. Instead of earning interest in the savings account and thus having to pay tax on the interest earned, it reduces the amount of interest you have to pay on the loan. Example of a 100% offset account: Loan Amount $100,000, Offset Balance $20,000; you pay interest on the loan based on an effective $80,000 loan balance. Example of a 50% offset account: Loan Account $100,000, Offset Balance $20,000; you pay interest on the loan based on an effective $90,000 loan balance. The benefit of an offset account is that you can put all your income into it and use it to pay all your expenses. The more the funds in the offset account build up the less interest you will pay on your loan. You are much better off having the offset account linked to the larger loan because once your funds in the offset increase over $50,000 you will not receive any further benefit if it is linked to the smaller loan. So by offsetting the larger loan you will end up saving the most money. Also, something extra to think about, if you are paying interest only your loan balance will not change over the interest only period and your interest payments will get smaller and smaller as your offset account grows. On the other hand, if you are paying principal and interest then your loan balance will reduce much faster as your offset account increases. This is because with principal and interest you have a minimum amount to pay each month (made up of a portion of principal and a portion of interest). As the offset account grows you will be paying less interest, so a larger portion of the principal is paid off each month.",
"title": ""
},
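To make the 100% and 50% offset examples above concrete, here is a small sketch that reproduces them for a single year; the 5% loan rate is an assumed figure, not one from the question.

```python
# Interest payable for one year on a loan with a linked offset account,
# mirroring the $100,000 loan / $20,000 offset examples in the passage.
def annual_interest(loan: float, offset: float, offset_pct: float, rate: float) -> float:
    """offset_pct is 1.0 for a 100% offset account, 0.5 for a 50% offset."""
    effective_balance = loan - offset * offset_pct
    return effective_balance * rate

loan_balance = 100_000.00
offset_balance = 20_000.00
assumed_rate = 0.05  # hypothetical annual loan rate

print(annual_interest(loan_balance, offset_balance, 1.0, assumed_rate))  # 4000.0 (interest on $80,000)
print(annual_interest(loan_balance, offset_balance, 0.5, assumed_rate))  # 4500.0 (interest on $90,000)
```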
{
"docid": "924c06ef4114ce9a9f421443152b2e88",
"text": "\"As previously answered, the solution is margin. It works like this: You deposit e.g. 1'000 USD at your trading company. They give you a margin of e.g. 1:100, so you are allowed to trade with 100'000 USD. Let's say you buy 5'000 pieces of a stock at $20 USD (fully using your 100'000 limit), and the price changes to $20.50 . Your profit is 5000* $0.50 = $2'500. Fast money? If you are lucky. Let's say before the price went up to 20.50, it had a slight dip down to $19.80. Your loss was 5000* $0.2 = 1'000$. Wait! You had just 1000 to begin with: You'll find an email saying \"\"margin call\"\" or \"\"termination notice\"\": Your shares have been sold at $19.80 and you are out of business. The broker willingly gives you this credit, since he can be sure he won't loose a cent. Of course you pay interest for the money you are trading with, but it's only for minutes. So to answer your question: You don't care when you have \"\"your money\"\" back, the trading company will always be there to give you more as long as you have deposit left. (I thought no one should get margin explained without the warning why it is a horrible idea to full use the ridiculous high margins some broker offer. 1:10 might or might not be fine, but 1:100 is harakiri.)\"",
"title": ""
},
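The worked margin example above can be replayed in a few lines. This is only a sketch of the arithmetic in that answer (1:100 leverage, 5,000 shares at $20, a $1,000 deposit); real brokers apply maintenance-margin rules that differ from this simplified wipe-out point.

```python
# Replaying the margin example: $1,000 deposit, 1:100 leverage,
# 5,000 shares bought at $20. The position is closed out once the
# unrealised loss reaches the whole deposit.
deposit = 1_000.00
shares = 5_000
entry_price = 20.00

def pnl(price: float) -> float:
    """Unrealised profit/loss at a given share price."""
    return shares * (price - entry_price)

# Price at which losses equal the whole deposit (the "margin call" point
# in the passage's simplified telling): entry_price - deposit / shares.
wipeout_price = entry_price - deposit / shares

print(f"P&L at $20.50: ${pnl(20.50):,.2f}")           # +2,500.00
print(f"P&L at $19.80: ${pnl(19.80):,.2f}")           # -1,000.00
print(f"Deposit exhausted at: ${wipeout_price:.2f}")  # $19.80
```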
{
"docid": "889b617c42eb36f14a26d3441f38a8f3",
"text": "Have you tried calling a Forex broker and asking them if you can take delivery on currency? Their spreads are likely to be much lower than banks/ATMs.",
"title": ""
},
{
"docid": "fff62a931e555cafd9c3710b6eda3f33",
"text": "\"What about the escudo balance in my checking account in Cabo Verde? Are the escudos that I held for months or years, before eventually deciding to change to dollars, considered an investment? Don't know. You tell us. Investment defined as an activity taken to produce income. Did you put the money in the checking account with a full expectation of profits to be made from that? Or you only decided that it is an investment in retrospective, after the result is known, because it provides you more tax benefit? To me it sounds like you have two operating currencies and you're converting between them. Doesn't sound like an investment. Generally, from my experience, bank accounts are not considered investments (even savings accounts aren't). Once you deposit into a CD or bond or money market - you get a cash-equivalent which can be treated as an investment. But that's my personal understanding, if there are large amounts involved, I'd suggest talking to a US-licensed CPA/EA specializing on expats in your area. Pub 54 is really a reference for only the most trivial of the questions an expat may have. It doesn't even begin to describe the complexity of the monstrosity that is called \"\"The US Tax Code for Expats and Foreigners\"\".\"",
"title": ""
},
{
"docid": "cd25cc79df75f8dd9273d36f27a005e1",
"text": "Technically, yes, you can do this. It's a form of arbitrage: you're taking advantage of a small price difference between two markets. But is it worth the hassle of keeping on top of the overdraft and making sure you don't incur any accidental penalties or fees? Interest rates are super low, and floating £1000 or £2000, you're only going to generate £10-20 per year in a basic savings account.",
"title": ""
},
{
"docid": "e673718faaf37ffb0a789565e6e80b43",
"text": "You would need to check with Bank as it varies from Bank to Bank. You can break the FD's. Generally you don't loose the interest you have earned for 1 years, however the rate of interest will be reduced. i.e. if the rate was 7% for 1 year FD and 8% for 2 years FD, when you break after a year you will get only 7%. Generally this can happen in few hours but definitely in 2 days. You can get a Loan against FD's. Generally the rate of interest is 2% higher than FD rate. There is also initial processing fee, etc. Check with the Bank, it may take few days to set things up.",
"title": ""
},
{
"docid": "ca428c4ae49ef766ae9176b7c2efa90a",
"text": "I won't make any assumptions about the source of the money. Typically however, this can be an emotional time and the most important thing to do is not act rashly. If this is an amount of money you have never seen before, getting advice from a fee only financial adviser would be my second step. The first step is to breathe and promise yourself you will NOT make any decisions about this money in the short term. Better to have $100K in the bank earning nearly zero interest than to spend it in the wrong way. If you have to receive the money before you can meet with an adviser, then just open a new savings account at your bank (or credit union) and put the money in there. It will be safe and sound. Visit http://www.napfa.org/ and interview at least three advisers. With their guidance, think about what your goals are. Do you want to invest and grow the money? Pay off debt? Own a home or new large purchase? These are personal decisions, but the adviser might help you think of goals you didn't imagine Create a plan and execute it.",
"title": ""
},
{
"docid": "7395386482e12327b4aac3ac117887ab",
"text": "You can use Norbet's Gambit to convert between USD and CAD either direction. I have never personally done this, but I am planning to convert some CAD to USD soon, so I can invest in USD index funds without paying the typical 2% conversion fee. You must be okay with waiting a few days for the trades to settle, and okay with the fact that the exchange rate will almost certainly change before you sell the shares in the opposite currency. The spread on DLR.TO is about 0.01% - 0.02%, and you also have brokerage commissions and fees. If you use a discount broker the commissions and fees can be quite low. EG. To transfer $5000 USD to CAD using Questrade, you would deposit the USD into a Questrade account and then purchase ~500 units of DLR.U.TO , since it is an ETF there is no commission on the purchase. Then you request that they journal the shares over to DLR.TO and you sell them in CAD (will have about a $5 fee in CAD, and lose about $1 on the spread) and withdraw. The whole thing will have cost you $6 CAD, in lieu of ~$100 you would pay if you did a straightforward conversion with a 2% exchange fee. The difference in fees scales up as the amount you transfer increases. Someone has posted the chat log from when they requested their shares be journaled from DLR.TO to DLR.U.TO here. It looks like it was quite straightforward. Of course there is a time-cost, and the nuisance of signing up for an maintaining an account with a broker if you don't have one already. You can do it on non discount-brokers, but it will only be worth it to do it with a larger amount of money, since the commissions are larger. Note: If you have enough room to hold the CAD amount in your TFSA and will still have that much room at the end of the calendar year, I recommend doing the exchange in a TFSA account. The taxes are minimal unless the exchange rate changes drastically while your trades are settling (from capital gains or losses while waiting a few days for the trades to settle), but they are annoying to calculate, if you do it often. Warning if you do it in a TFSA be sure not to over contribute. Every time you deposit counts as a contribution and your withdrawals don't count against the limit until the next calendar year.",
"title": ""
},
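The $6-versus-$100 comparison in the passage above is easy to parameterise. The sketch below uses the figures quoted there (a ~$5 sell commission, ~$1 lost to the spread, and a 2% bank conversion fee); real commissions and spreads vary by broker.

```python
# Rough cost comparison: Norbert's Gambit vs a straight 2% currency
# conversion fee, using the numbers quoted in the passage.
def gambit_cost(sell_commission: float = 5.00, spread_loss: float = 1.00) -> float:
    return sell_commission + spread_loss

def bank_conversion_cost(amount: float, fee_rate: float = 0.02) -> float:
    return amount * fee_rate

amount = 5_000.00
print(f"Norbert's Gambit: ~${gambit_cost():,.2f}")                 # ~$6
print(f"2% conversion:    ~${bank_conversion_cost(amount):,.2f}")  # ~$100
print(f"Approximate saving: ${bank_conversion_cost(amount) - gambit_cost():,.2f}")
```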
{
"docid": "128d222913be065a4e270541bff04ba4",
"text": "Depends on the countries and their rules about moving money across the border, but in this case that appears entirely reasonable. Of course it would be a gamble unless you can predict the future values of currency better than most folks; there is no guarantee that the exchange rate will move in any particular direction. I have no idea whether any tax is due on profit from currency arbitrage.",
"title": ""
},
{
"docid": "8ee0cf90186bff11bd3da57fd10154e0",
"text": "\"As is so often the case, there is an asterisk next to that 2.5% interest offer. It leads you to a footnote which says: Savings Interest Rate Offer of 2.5% is available between January 1, 2015 and March 31, 2015 on all net new deposits made between January 1, 2015 and March 31, 2015 to a maximum of $250,000.00 per Account registration. You only earn 2.5% interest on deposits made during those three months. Also, on the full offer info page, it says: During the Offer Period, the Bank will calculate Additional Interest on eligible net new deposits and: All interest payments are ineligible for the purposes of calculating Additional Interest and will not be calculated for the purposes of determining eligible daily balances. In other words, any interest paid into an Applicable Account, including Additional Interest, will not be treated as a new deposit for subsequently calculating Additional Interest payments. I couldn't totally parse out all the details of the offer from their legalese, but what it sounds like is you will earn 2.5% interest on money that you deposit into the account during those three months. Any interest you accrue during that time will not count as a deposit in this sense, and so will not earn 2.5% compounded returns. The \"\"During the Offer Period\"\" qualification also makes it sound like this extra interest will only be paid during the three months (presumably at a 2.5% annualized rate, but I can't see where it actually says this). So essentially you are getting a one-time bonus for making deposits during a specific three-month period. The account doesn't really earn 2.5% interest in the normal sense. The long-term interest rate will be what it normally is for their savings accounts, which this page says is 1.05%.\"",
"title": ""
},
{
"docid": "a0a837bb59550e224a7b7b583c1f7dc1",
"text": "You shouldn't be charged interest, unless possibly because your purchases involve a currency conversion. I've made normal purchases that happened to involve changes in currency. The prices were quoted in US$ to me. On the tail end, though, the currency change was treated as a cash advance, which accrues interest immediately.",
"title": ""
},
{
"docid": "97d2304c009c366add62833f7a2fd500",
"text": "You can check the website for the company that manages the fund. For example, take the iShares Nasdaq Biotechnology ETF (IBB). iShares publishes the complete list of the fund's holdings on their website. This information isn't always easy to find or available, but it's a place to start. For some index funds, you should just be able to look up the index the fund is trying to match. This won't be perfect (take Vanguard's S&P 500 ETF (VOO); the fund holds 503 stocks, while the S&P 500 index is comprised of exactly 500), but once again, it's a place to start. A few more points to keep in mind. Remember that many ETF's, including equity ETF's, will hold a small portion of their assets in cash or cash-equivalent instruments to assist with rebalancing. For index funds, this may not be reflected in the index itself, and it may not show up in the list of holdings. VOO is an example of this. However, that information is usually available in the fund's prospectus or the fund's site. Also, I doubt that many stock ETF's, at least index funds, change their asset allocations all that frequently. The amounts may change slightly, but depending on the size of their holdings in a given stock, it's unlikely that the fund's manager would drop it entirely.",
"title": ""
}
] |
fiqa
|
4fe15cc2b6efa6b3f58eaa096abc0ad4
|
How can I figure out how much to bid on a parking space?
|
[
{
"docid": "13eebc93749f883f4ed2b7a6c5550e65",
"text": "If the cash flow information is complete, the valuation can be determined with relative accuracy and precision. Assuming the monthly rent is correct, the annual revenue is $1,600 per year, $250/mo * 12 months - $1,400/year in taxes. Real estate is best valued as a perpetuity where P is the price, i is the income, and r is the rate of interest. Theoreticians would suggest that the best available rate of interest would be the risk free rate, a 30 year Treasury rate ~3.5%, but the competition can't get these rates, so it is probably unrealistic. Anways, aassuming no expenses, the value of the property is $1,600 / 0.035 at most, $45,714.29. This is the general formula, and it should definitely be adjusted for expenses and a more realistic interest rate. Now, with a better understanding of interest rates and expenses, this will predict the most likely market value; however, it should be known that whatever interest rate is applied to the formula will be the most likely rate of return received from the investment. A Graham-Buffett value investor would suggest using a valuation no less than 15% since to a value investor, there's no point in bidding unless if the profits can be above average, ~7.5%. With a 15% interest rate and no expenses, $1,600 / .15, is $10,666.67. On average, it is unlikely that a bid this low will be successful; nevertheless, if multiple bids are placed using this similar methodology, by the law of small numbers, it is likely to hit the lottery on at most one bid.",
"title": ""
},
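Here is a tiny sketch of the perpetuity valuation used above, reproducing the $45,714 and $10,667 figures; the discount rates are the ones the answer discusses, not recommendations.

```python
# Perpetuity valuation P = i / r applied to the parking space's
# $1,600/year net income, at the two discount rates from the answer.
def perpetuity_value(annual_income: float, discount_rate: float) -> float:
    return annual_income / discount_rate

net_income = 250 * 12 - 1_400                          # $1,600 per year
print(round(perpetuity_value(net_income, 0.035), 2))   # 45714.29
print(round(perpetuity_value(net_income, 0.15), 2))    # 10666.67
```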
{
"docid": "7a4517829633220b631b2b74684ce8d1",
"text": "\"Scenario 1: Assume that you plan to keep the parking space for the rest of your life and collect the income from the rental. You say these spaces rent for $250 per month and there are fees of $1400 per year. Are there any other costs? Like would you be responsible for the cost of repaving at some point? But assuming that's covered in the $1400, the net profit is 250 x 12 - 1400 = $1600 per year. So now the question becomes, what other things could you invest your money in, and what sort of returns do those give? If, say, you have investments in the stock market that are generating a 10% annual return and you expect that rate of return to continue indefinitely, than if you pay a price that gives you a return of less than 10%, i.e. if you pay more than $16,000, then you would be better off to put the money in the stock market. That is, you should calculate the fair price \"\"backwards\"\": What return on investment is acceptable, and then what price would I have to pay to get that ROI? Oh, you should also consider what the \"\"occupancy rate\"\" on such parking spaces is. Is there enough demand that you can realistically expect to have it rented out 100% of the time? When one renter leaves, how long does it take to find another? And do you have any information on how often renters fail to pay the rent? I own a house that I rent out and I had two tenants in a row who failed to pay the rent, and the legal process to get them evicted takes months. I don't know what it takes to \"\"evict\"\" someone from a parking space. Scenario 2: You expect to collect rent on this space for some period of time, and then someday sell it. In that case, there's an additional piece of information you need: How much can you expect to get for this property when you sell it? This is almost surely highly speculative. But you could certainly look at past pricing trends. If you see that the value of a parking space in your area has been going up by, whatever, say 4% per year for the past 20 years, it's reasonable to plan on the assumption that this trend will continue. If it's been up and down and all over the place, you could be taking a real gamble. If you pay $30,000 for it today and when the time comes to sell the best you can get is $15,000, that's not so good. But if there is some reasonable consistent average rate of growth in value, you can add this to the expected rents. Like if you can expect it to grow in value by $1000 per year, then the return on your investment is the $1600 in rent plus $1000 in capital growth equals $2600. Then again do an ROI calculation based on potential returns from other investments.\"",
"title": ""
}
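Working the fair price "backwards" from a required return, as the answer above suggests, is a one-liner; the sketch below also folds in the optional capital-growth assumption. The 10% required return and $1,000/year appreciation are the illustrative numbers from that answer.

```python
# Price you could pay for the parking space so that
# (rent profit + expected appreciation) / price hits your required return.
def max_price(annual_rent_profit: float, required_return: float,
              annual_appreciation: float = 0.0) -> float:
    return (annual_rent_profit + annual_appreciation) / required_return

rent_profit = 250 * 12 - 1_400                # $1,600/year after fees
print(max_price(rent_profit, 0.10))           # 16000.0  (rent only)
print(max_price(rent_profit, 0.10, 1_000.0))  # 26000.0  (rent + $1,000/yr growth)
```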
] |
[
{
"docid": "137304a6d70a9b27ece9809f15ac64d2",
"text": "I think your math is fine, and also consider insurance costs and the convenience factor of each scenario. Moving a car frequently to avoid parking tickets will become tedious. I'd rather spend an hour renting a car 20 times in a year rather than have to spend 15 minutes moving a car every three days. And if there's no other easy parking, that 15 minutes can take a lot longer. Plus it'll get dirty sitting there, could get vandalized. Yuck. For only 20 days/year, I don't see how owning a car is worth the hassle. I recommend using a credit card that comes with free car rental insurance.",
"title": ""
},
{
"docid": "822e1f9492535c3f6384740dce620347",
"text": "If the company that owns the lot is selling them it is doing so because it feels it will make more money doing so. You need to read carefully what it is you are getting and what the guarantees are from the owner of the property and the parking structure. I have heard from friends in Chicago that said there are people who will sell spaces they do not own as a scam. There are also companies that declare bankruptcy and go out of business after signing long term leases for their spots. They sell the lot to another company(which they have an interest in) and all the leases that they sold are now void so they can resell the spots. Because of this if I were going to invest in a parking space, I would make sure: The company making the offer is reputable and solvent Check for plans for major construction/demolition nearby that would impact your short and long term prospects for rent. Full time Rental would Recoup my investment in less than 5 years. Preferably 3 years. The risk on this is too high for me with out that kind of return.",
"title": ""
},
{
"docid": "0e8002a8483e94f44f69a314c387ea4a",
"text": "I believe @Dilip addressed your question alread, I am going to focus on your second question: What are the criteria one should use for estimating the worth of the situation? The criteria are: I hope this helps.",
"title": ""
},
{
"docid": "ca5eeab62ad25a710f6f6d4e5a082e79",
"text": "No, this is misbehavior of sales software that tries to automatically find the price point which maximizes profit. There have been much worse examples. Ignore it. The robot will eventually see that no sales occurred and try a more reasonable price.",
"title": ""
},
{
"docid": "70591461ef9fce7e7b32b7b259bf14f6",
"text": "The quant aspect '''''. This is the kind of math I was wondering if it existed, but now it sounds like it is much more complex in reality then optimizing by evaluating different cost of capital. Thank you for sharing",
"title": ""
},
{
"docid": "52e40fd08cb30cf52d054148af711b47",
"text": "\"I read a really good tract that my credit union gave me years ago written by a former car salesman about negotiation tactics with car dealers. Wish I could find it again, but I remember a few of the main points. 1) Never negotiate based on the monthly payment amount. Car salesmen love to get you into thinking about the monthly loan payment and often start out by asking what you can afford for a payment. They know that they can essentially charge you whatever they want for the car and make the payments hit your budget by tweaking the loan terms (length, down payment, etc.) 2) (New cars only) Don't negotiate on the price directly. It is extremely hard to compare prices between dealerships because it is very hard to find exactly the same combination of options. Instead negotiate the markup amount over dealer invoice. 3) Negotiate one thing at a time A favorite shell game of car dealers is to get you to negotiate the car price, trade-in price, and financing all at one time. Unless you are a rain-man mathematical genius, don't do it. Doing this makes it easy for them to make concessions on one thing and take them right back somewhere else. (Minus $500 on the new car, plus $200 through an extra half point on financing, etc). 4) Handling the Trade-In 5) 99.9999% of the time the \"\"I forgot to mention\"\" extra items are a ripoff They make huge bonuses for selling this extremely overpriced junk you don't need. 6) Scrutinize everything on the sticker price I've seen car dealers have the balls to add a line item for \"\"Marketing Costs\"\" at around $500, then claim with a straight face that unlike OTHER dealers they are just being upfront about their expenses instead of hiding them in the price of the car. Pure bunk. If you negotiate based on an offset from the invoice instead of sticker price it helps you avoid all this nonsense since the manufacturer most assuredly did not include \"\"Marketing costs\"\" on the dealer invoice. 7) Call Around before closing the deal Car dealers can be a little cranky about this, but they often have an \"\"Internet sales person\"\" assigned to handle this type of deal. Once you know what you want, but before you buy, get the model number and all the codes for the options then call 2-3 dealers and try to get a quote over the phone or e-mail on that exact car. Again, get the quote in terms of markup from dealer invoice price, not sticker price. Going through the Internet sales guy doesn't at all mean you have to buy on the Internet, I still suggest going down to the dealership with the best price and test driving the car in person. The Internet guy is just a sales guy like all the rest of them and will be happy to meet with you and talk through the deal in-person. Update: After recently going through this process again and talking to a bunch of dealers, I have a few things to add: 7a) The price posted on the Internet is often the dealer's bottom line number. Because of sites like AutoTrader and other car marketplaces that let you shop the car across dealerships, they have a lot of incentive to put their rock-bottom prices online where they know people aggressively comparison shop. 7b) Get the price of the car using the stock number from multiple sources (Autotrader, dealer web site, eBay Motors, etc.) and find the lowest price advertised. Then either print or take a screenshot of that price. Dealers sometimes change their prices (up or down) between the time you see it online and when you get to the dealership. 
I just bought a car where the price went up $1,000 overnight. The sales guy brought up the website and tried to convince me that I was confused. I just pulled up the screenshot on my iPhone and he stopped arguing. I'm not certain, but I got the feeling that there is some kind of bait-switch law that says if you can prove they posted a price they have to honor it. In at least two dealerships they got very contrite and backed away slowly from their bargaining position when I offered proof that they had posted the car at a lower price. 8) The sales guy has ultimate authority on the deal and doesn't need approval Inevitably they will leave the room to \"\"run the deal by my boss/financing guy/mom\"\" This is just a game and negotiating trick to serve two purposes: - To keep you in the dealership longer not shopping at competitors. - So they can good-cop/bad-cop you in the negotiations on price. That is, insult your offer without making you upset at the guy in front of you. - To make it harder for you to walk out of the negotiation and compromise more readily. Let me clarify that last point. They are using a psychological sales trick to make you feel like an ass for wasting the guy's time if you walk out on the deal after sitting in his office all afternoon, especially since he gave you free coffee and sodas. Also, if you have personally invested a lot of time in the deal so far, it makes you feel like you wasted your own time if you don't cross the goal line. As soon as one side of a negotiation forfeits the option to walk away from the deal, the power shifts significantly to the other side. Bottom line: Don't feel guilty about walking out if you can't get the deal you want. Remember, the sales guy is the one that dragged this thing out by playing hide-and-seek with you all day. He wasted your time, not the reverse.\"",
"title": ""
},
{
"docid": "e750f12f5683c48b851b165badc91522",
"text": "\"Do some homework to determine what is really a fair price for the house. Zillow helps. County tax records help, including last sale price and mortgage, if any (yes, it's public). Start at the low end of fair. Don't rely on the Realtor. He gets paid only if a sale occurs, and he's already coaxing you closer to a paycheck. He might be right with the numbers, though, so check for yourself. When you get within a thousand or two of acceptance, \"\"shut up\"\". I don't mean that in a rude way. A negotiating class I took taught me how effective silence can be, at the right time. The other side knows you're close and the highest you've offered. If they would be willing to find a way to come down to that, this is the time. The awkward silence is surprisingly effective.\"",
"title": ""
},
{
"docid": "e513a42cc62175045e50d61a634a5d83",
"text": "If an offered price is below what people are willing to sell for, it is simply ignored. (What happens if I offer to buy lots of cars as long as I only have to pay $2 each? Same thing.)",
"title": ""
},
{
"docid": "7e5b4f091f7a0e9f2328d42e944873bc",
"text": "I don't believe you would be able to with only Net Sales and COGS. Are you talking about trying to estimate them? Because then I could probably come up with an idea based on industry averages, etc. I think you would need to know the average days outstanding, inventory turnover and the terms they're getting from their vendors to calculate actuals. There may be other ways to solve the problem you're asking but thats my thoughts on it.",
"title": ""
},
{
"docid": "9a52969d6de27e78057142e53b34db9c",
"text": "You're realizing the perils of using a DCF analysis. At best, you can use them to get a range of possible values and use them as a heuristic, but you'll probably find it difficult to generate a realistic estimate that is significantly different than where the price is already.",
"title": ""
},
{
"docid": "1423a5b34e0ba05d007a623a2b02f8ec",
"text": "To calculate you take the Price and divide it by the Earnings, or by the Sales, or by the Free Cash Flow. Most of these calculations are done for you on a lot of finance sites if the data is available. Such sites as Yahoo Finance and Google Finance as well as my personal favorite: Morningstar",
"title": ""
},
{
"docid": "c18cae75fef4be13785d41f25b2afd15",
"text": "The usual lazy recommendation: See what similar objects, in similar condition, of similar age, have sold for recently on eBay. That establishes a fair market value by directly polling the market.",
"title": ""
},
{
"docid": "adbf875f8d2517033d641b19a42c1ad0",
"text": "\"1) Get some gold. 2) Walk around, yelling, \"\"Hey, I have some gold, who wants to buy it?\"\" 3) Once you have enough interested parties, hold an auction and see who will give you the most dollars for it. 4) Trade the gold for that many dollars. 5) You have just measured the value of your gold.\"",
"title": ""
},
{
"docid": "70d0915408fb98db5d2f5e7cb0c31731",
"text": "Assuming cell A1 contains the number of trades: will price up to A1=100 at 17 each, and the rest at 14 each. The key is the MAX and MIN. They keep an item from being counted twice. If X would end up negative, MAX(0,x) clamps it to 0. By extension, if X-100 would be negative, MAX(0, X-100) would be 0 -- ie: that number doesn't increase til X>100. When A1=99, MIN(a1,100) == 99, and MAX(0,a1-100) == 0. When A1=100, MIN(a1,100) == 100, and MAX(0,a1-100) == 0. When A1=101, MIN(a1,100) == 100, and MAX(0,a1-100) == 1. Of course, if the 100th item should be $14, then change the 100s to 99s.",
"title": ""
},
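For readers who prefer to check the MIN/MAX logic outside a spreadsheet, here is an equivalent sketch that replays the A1 = 99 / 100 / 101 walkthrough; the 17 and 14 unit prices and the 100-trade breakpoint come from the passage.

```python
# Tiered pricing equivalent to =MIN(A1,100)*17 + MAX(0,A1-100)*14:
# the first 100 trades cost 17 each, every trade beyond that costs 14.
def total_price(trades: int, tier_limit: int = 100,
                first_tier: float = 17.0, second_tier: float = 14.0) -> float:
    return min(trades, tier_limit) * first_tier + max(0, trades - tier_limit) * second_tier

for n in (99, 100, 101):
    print(n, total_price(n))   # 99 1683.0 / 100 1700.0 / 101 1714.0
```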
{
"docid": "cf436e92c85791cdbc4cce4ca62c946d",
"text": "\"I think there's a measure of confirmation bias here. If you talk to somebody that started a successful business and got a million out of it, he'd say \"\"it's easy, just do this and that, like I did\"\". If you consider this as isolated incident, you would ignore thousands of others that did exactly the same and still struggle to break even, or are earning much less, or just went broke and moved on long time ago. You will almost never hear about these as books titled \"\"How I tried to start a business and failed\"\" sell much worse than success stories. So I do not think there's a guaranteed easy way - otherwise we'd have much more millionaires than we do now :) However, it does not mean any of those ways is not worth trying - whatever failure rate there is, it's less than 100% failure rate of not trying anything. You have to choose what fits your abilities and personality best - frugality, risk, inventiveness? Then hope you get as lucky as those \"\"it's easy\"\" people are, I guess.\"",
"title": ""
}
] |
fiqa
|
07fcd41ea3fc7142f4f41c0231960e9f
|
Paying off mortgage or invest in annuity
|
[
{
"docid": "7a0bb7979da8c6d219194fbe361f039b",
"text": "You can't pay your bills with equity in your house. Assuming you paid off the mortgage, where would the money come from that you plan to live off of? If that is your whole retirement savings I'd say do neither. Maybe an annuity (not variable) for SOME of the money, keep the rest invested in conservative investments some of it in cash for emergencies.",
"title": ""
},
{
"docid": "359d3c194143a1f84f2c482a5df6ebdc",
"text": "\"There is no formula to answer the question. You have to balance return on investment with risk. There's also the question of whether you have any children or other heirs that you would like to leave money to. The mortgage is presumably a guaranteed thing: you know exactly how much the payments will be for the rest of the loan. I think most annuities have a fixed rate of return, but they terminate when you both die. There are annuities with a variable return, but usually with a guaranteed minimum. So if you got an annuity with a fixed 3.85% return, and you lived exactly 18 more years, then (ignoring tax implications), there'd be no practical difference between the two choices. If you lived longer than 18 years, the annuity would be better. If less, paying off the mortgage would be better. Another option to consider is doing neither, but keeping the money in the 401k or some other investment. This will usually give better than 3.85% return, and the principal will be available to leave to your heirs. The big drawback to this is risk: investments in the stock market and the like usually do better than 3 or 4%, but not always, and sometimes they lose money. Earlier I said \"\"ignoring tax implications\"\". Of course that can be a significant factor. Mortgages get special tax treatment, so the effective interest rate on a mortgage is less than the nominal rate. 401ks also get special tax treatment. So this complicates up calculations trying to compare. I can't give definitive numbers without knowing the returns you might get on an annuity and your tax situation.\"",
"title": ""
},
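The break-even idea in the answer above can be sanity-checked by comparing cumulative cash flows. The sketch below is deliberately crude and uses entirely hypothetical numbers (a lifetime annuity payment versus a mortgage payment that would otherwise run for a fixed number of years), and it ignores taxes and investment growth exactly as the answer's first pass does.

```python
# Very crude break-even check between two options that each produce a
# monthly cash flow: e.g. "annuity income" vs "mortgage payment avoided
# by paying the loan off". All numbers below are hypothetical placeholders;
# plug in your own quotes.
from itertools import count

def breakeven_years(monthly_a: float, monthly_b: float,
                    b_stops_after_years: int) -> int | None:
    """First year in which cumulative option-A cash exceeds option-B cash.
    Option B (the paid-off mortgage) stops producing savings once the
    original loan term would have ended anyway."""
    total_a = total_b = 0.0
    for year in count(1):
        total_a += 12 * monthly_a
        if year <= b_stops_after_years:
            total_b += 12 * monthly_b
        if total_a > total_b:
            return year
        if year > 100:          # give up; no crossover within a lifetime
            return None

# Hypothetical example: a lifetime annuity paying $900/month vs a mortgage
# payment of $1,200/month that would otherwise run for 15 more years.
print(breakeven_years(900.0, 1_200.0, 15))
```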
{
"docid": "fb78091094c61cbf35643c978ba23f06",
"text": "I am in the process of writing an article about how to maximize one's Social Security benefits, or at least, how to start the analysis. This chart, from my friends at the Social Security office shows the advantage of waiting to take your benefit. In your case, you are getting $1525 at age 62. Now, if you wait 4 years, the benefit jumps to $2033 or $508/mo more. You would get no benefit for 4 years and draw down savings by $73,200, but would get $6,096/yr more from 64 on. Put it off until 70, and you'd have $2684/mo. At some point, your husband should apply for a spousal benefit (age 66 for him is what I suggest) and collect that for 4 years before moving to his own benefit if it's higher than that. Keep in mind, your generous pensions are likely to push you into having your social security benefit taxed, and my plan, above will give you time to draw down the 401(k) to help avoid or at least reduce this.",
"title": ""
}
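The trade-off described above ($73,200 of benefits forgone in exchange for $508/month more for life) implies a break-even age, which is easy to compute; the figures below are the ones quoted in the answer and ignore cost-of-living adjustments and taxes.

```python
# Break-even for delaying Social Security from 62 to 66, using the
# answer's numbers: $1,525/month at 62 vs $2,033/month at 66.
benefit_62 = 1_525.00
benefit_66 = 2_033.00
years_delayed = 4

forgone = benefit_62 * 12 * years_delayed          # $73,200 not collected
extra_per_year = (benefit_66 - benefit_62) * 12    # $6,096/yr more

breakeven_years_after_66 = forgone / extra_per_year
print(f"Forgone benefits: ${forgone:,.0f}")
print(f"Break-even: {breakeven_years_after_66:.1f} years after 66 "
      f"(about age {66 + breakeven_years_after_66:.0f})")
```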
] |
[
{
"docid": "74b3f1e58bda2b062d3ad816837fd262",
"text": "Certainly, paying off the mortgage is better than doing nothing with the money. But it gets interesting when you consider keeping the mortgage and investing the money. If the mortgage rate is 5% and you expect >5% returns from stocks or some other investment, then it might make sense to seek those higher returns. If you expect the same 5% return from stocks, keeping the mortgage and investing the money can still be more tax-efficient. Assuming a marginal tax rate of 30%, the real cost of mortgage interest (in terms of post-tax money) is 3.5%*. If your investment results in long-term capital gains taxed at 15%, the real rate of growth of your post-tax money would be 4.25%. So in post-tax terms, your rate of gain is greater than your rate of loss. On the other hand, paying off the mortgage is safer than investing borrowed money, so doing so might be more appropriate for the risk-averse. * I'm oversimplifying a bit by assuming the deduction doesn't change your marginal tax rate.",
"title": ""
},
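The 3.5% versus 4.25% after-tax comparison in the passage above is just two multiplications; the sketch below reproduces it. The 5% rates and the 30%/15% tax rates are the passage's illustrative assumptions, and this ignores the deduction-threshold caveat the author flags.

```python
# After-tax cost of mortgage debt vs after-tax growth of an investment,
# using the passage's assumptions: 5% on both sides, mortgage interest
# deducted at a 30% marginal rate, gains taxed at 15% long-term capital gains.
mortgage_rate = 0.05
investment_return = 0.05
marginal_tax_rate = 0.30
ltcg_tax_rate = 0.15

after_tax_mortgage_cost = mortgage_rate * (1 - marginal_tax_rate)   # 3.5%
after_tax_growth = investment_return * (1 - ltcg_tax_rate)          # 4.25%

print(f"After-tax cost of mortgage:  {after_tax_mortgage_cost:.2%}")
print(f"After-tax investment growth: {after_tax_growth:.2%}")
```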
{
"docid": "c4d9894d7f966b3aa952a5e5fe5676c0",
"text": "\"The mortgage has a higher interest rate, how can it make sense to pay off the HELOC first?? As for the mutual fund, it comes down to what returns you are expecting. If the after-tax return is higher than the mortgage rate then invest, otherwise \"\"invest\"\" in paying down the mortgage. Note that paying down debt is usually the best investment you have.\"",
"title": ""
},
{
"docid": "dbb1a5aaa7bc8c7f62db10fa77815473",
"text": "Based on your numbers, it sounds like you've got 12 years left in the private student loan, which just seems to be an annoyance to me. You have the cash to pay it off, but that may not be the optimal solution. You've got $85k in cash! That's way too much. So your options are: -Invest 40k -Pay 2.25% loan off -Prepay mortgage 40k Play around with this link: mortgage calculator Paying the student loan, and applying the $315 to the monthly mortgage reduces your mortgage by 8 years. It also reduces the nag factor of the student loan. Prepaying the mortgage (one time) reduces it by 6 years. (But, that reduces the total cost of the mortgage over it's lifetime the most) Prepaying the mortgage and re-amortizing it over thirty years (at the same rate) reduces your mortgage payment by $210, which you could apply to the student loan, but you'd need to come up with an extra $105 a month.",
"title": ""
},
{
"docid": "bf79dde3dc875f2fbf63f83f73b19f09",
"text": "See my recent answer to a similar question on prepaying a mortgage versus investing in IRA. The issue here is similar: you want to compare the relative rates of funding your retirement account versus paying down your debt. If you can invest at a better rate than you are paying on your debt, with similar risk, then you should invest. Otherwise, pay down your debt. The big difference with your situation is that you have a variable rate loan, so there's a significant risk that the rate on it will go up. If I was in your shoes, I would do the following: But that's me. If you're more debt-averse, you may decide to prepay that fixed rate loan too.",
"title": ""
},
{
"docid": "1dd669d41dae2b13de2963af30ee98d2",
"text": "\"First, I would recommend getting rid of this ridiculous debt, or remember this day and this answer, \"\"you will be living this way for many years to come and maybe worse, no/not enough retirement\"\". Hold off on any retirement savings right now so that the money can be used to crush this debt. Without knowing all of your specifics (health insurance deductions, etc.) and without any retirement contribution, given $190,000 you should probably be taking home around $12,000 per month total. Assuming a $2,000 mortgage payment (30 year term), that is $10,000 left per month. If you were serious about paying this off, you could easily live off of $3,000 per month (probably less) and have $7,000 left to throw at the student loan debt. This assumes that you haven't financed automobiles, especially expensive ones or have other significant debt payments. That's around 3 years until the entire $300,000 is paid! I have personally used and endorse the snowball method (pay off smallest to largest regardless of interest rate), though I did adjust it slightly to pay off some debts first that had a very high monthly payment so that I would then have this large payment to throw at the next debt. After the debt is gone, you now have the extra $7,000 per month (probably more if you get raises, bonuses etc.) to enjoy and start saving for retirement and kid's college. You may have 20-25 years to save for retirement; at $4,000 per month that's $1 million in just savings, not including the growth (with moderate growth this could easily double or more). You'll also have about 14 years to save for college for this one kid; at $1,500 per month that's $250,000 (not including investment growth). This is probably overkill for one kid, so adjust accordingly. Then there's at least $1,500 per month left to pay off the mortgage in less than half the time of the original term! So in this scenario, conservatively you might have: Obviously I don't know your financials or circumstances, so build a good budget and play with the numbers. If you sacrifice for a short time you'll be way better off, trust me from experience. As a side note: Assuming the loan debt is 50/50 you and your husband, you made a good investment and he made a poor one. Unless he is a public defender or charity attorney, why is he making $60,000 when you are both attorneys and both have huge student loan debt? If it were me, I would consider a job change. At least until the debt was cleaned up. If he can make $100,000 to $130,000 or more, then your debt may be gone in under 2 years! Then he can go back to the charity gig.\"",
"title": ""
},
{
"docid": "a0b685b88b9cb09a1db6a3610f331f35",
"text": "As other's have said, paying off the student loan first makes the most sense because of That said, are you planning on staying in your house for a particularly long time? If so, refinancing your mortgage into a fixed-rate loan might be the best use of your money long term. Not sure how much time is left on your 5/1 ARM before the rate starts to float, but if rates rise, your mortgage could quickly become more expensive than your student loan.",
"title": ""
},
{
"docid": "df0515b8e229a35936b1f259d49b8ea3",
"text": "I like this option, rather than exposing all 600k to market risk, I'd think of paying off the mortgage as a way to diversify my portfolio. Expose 400k to market risk, and get a guaranteed 3.75% return on that 200k (in essence). Then you can invest the money you were putting towards your mortgage each month. The potential disadvantage, is that the extra 200k investment could earn significantly more than 3.75%, and you'd lose out on some money. Historically, the market beats 3.75%, and you'd come out ahead investing everything. There's no guarantee. You also don't have to keep your money invested, you can change your position down the road and pay off the house. I feel best about a paid off house, but I know that my sense of security carries opportunity cost. Up to you to decide how much risk you're willing to accept. Also, if you don't have an emergency fund, I'd set up that first and then go from there with investing/paying off house.",
"title": ""
},
{
"docid": "a31a9db361a97b55d29f3aaf7dc22cfc",
"text": "Other answers are already very good, but I'd like to add one step before taking the advice of the other answers... If you still can, switch to a 15 year mortgage, and figure out what percentage of your take-home pay the new payment is. This is the position taken by Dave Ramsey*, and I believe this will give you a better base from which to launch your other goals for two reasons: Since you are then paying it off faster at a base payment, you may then want to take MrChrister's advice but put all extra income toward investments, feeling secure that your house will be paid off much sooner anyway (and at a lower interest rate). * Dave's advice isn't for everyone, because he takes a very long-term view. However, in the long-term, it is great advice. See here for more. JoeTaxpayer is right, you will not see anything near guaranteed yearly rates in mutual funds, so make sure they are part of a long-term investing plan. You are not investing your time in learning the short-term stock game, so stay away from it. As long as you are continuing to learn in your own career, you should see very good short-term gains there anyway.",
"title": ""
},
{
"docid": "e4ad5de991424ab48e01a72ac5cbd3ac",
"text": "\"I'll assume you live in the US for the start of my answer - Do you maximize your retirement savings at work, at least getting your employer's match in full, if they do this. Do you have any other debt that's at a higher rate? Is your emergency account funded to your satisfaction? If you lost your job and tenant on the same day, how long before you were in trouble? The \"\"pay early\"\" question seems to hit an emotional nerve with most people. While I start with the above and then segue to \"\"would you be happy with a long term 5% return?\"\" there's one major point not to miss - money paid to either mortgage isn't liquid. The idea of owing out no money at all is great, but paying anything less than \"\"paid in full\"\" leaves you still owing that monthly payment. You can send $400K against your $500K mortgage, and still owe $3K per month until paid. And if you lose your job, you may not so easily refinance the remaining $100K to a lower payment so easily. If your goal is to continue with real estate, you don't prepay, you save cash for the next deal. Don't know if that was your intent at some point. Disclosure - my situation - Maxing out retirement accounts was my priority, then saving for college. Over the years, I had multiple refinances, each of which was a no-cost deal. The first refi saved with a lower rate. The second, was in early 2000s when back interest was so low I took a chunk of cash, paid principal down and went to a 20yr from the original 30. The kid starts college, and we target retirement in 6 years. I am paying the mortgage (now 2 years into a 10yr) to be done the month before the kid flies out. If I were younger, I'd be at the start of a new 30 yr at the recent 4.5% bottom. I think that a cost of near 3% after tax, and inflation soon to near/exceed 3% makes borrowing free, and I can invest conservatively in stocks that will have a dividend yield above this. Jane and I discussed the plan, and agree to retire mortgage free.\"",
"title": ""
},
{
"docid": "1313281ff8064d868e5ab7c3094bc434",
"text": "It all depends on your priorities, but if it were me I'd work to get rid of that debt as your first priority based on a few factors: I might shift towards the house if you think you can save enough to avoid PMI, as the total savings would probably be more in aggregate if you plan on buying a house anyway with less than 20% down. Of course, all this is lower priority than funding your retirement at least up to the tax advantaged and/or employer matched maximums, but it sounds like you have that covered.",
"title": ""
},
{
"docid": "513293e3d919d4f98426df907777bc61",
"text": "I want to start investing money, as low risk as possible, but with a percentage growth of at least 4% over 10 - 15 years. ...I do have a mortgage, Then there's your answer. You get a risk-free return of the interest rate on your mortgage (I'm assuming it's more than 4%). Every bit you put toward your mortgage reduces the amount of interest you pay by the interest rate, helping you to pay it off faster. Then, once your mortgage is paid off, you can look at other investments that fit your risk tolerance and return requirements. That said, make sure you have enough emergency savings to reduce cash flow interruptions, and make sure you don't have any other debts to pay. I'm not saying that everyone with a mortgage should pay it off before other investments. You asked for a low-risk 4% investment, which paying your mortgage would accomplish. If you want more return (and more risk) then other investments would be appropriate. Other factors that might change your decision might be:",
"title": ""
},
{
"docid": "c8aea3fd2ed6a452833e4113135fef07",
"text": "So I will attempt to answer the other half of the question since people have given good feedback on the mortgage costs of your various options. Assumptions: It is certain that I am off on some (or all) of these assumptions, but they are still useful for drawing a comparison. If you were to make your mortgage payment, then contribute whatever you have left over to savings, this is where you would be at the end of 30 years. Wait, so the 30 year mortgage has me contributing $40k less to savings over the life of the loan, but comes out with a $20k higher balance? Yes, because of the way compounding interest works getting more money in there faster plays in your favor, but only as long as your savings venue is earning at a higher rate than the cost of the debt your are contrasting it with. If we were to drop the yield on your savings to 3%, then the 30yr would net you $264593, while the 15yr ends up with $283309 in the bank. Similarly, if we were to increase the savings yield to 10% (not unheard of for a strong mutual fund), the 30yr nets $993418, while the 15yr comes out at $684448. Yes in all cases, you pay more to the bank on a 30yr mortgage, but as long as you have a decent investment portfolio, and are making the associated contributions, your end savings come out ahead over the time period. Which sounds like it is the more important item in your overall picture. However, just to reiterate, the key to making this work is that you have an investment portfolio that out performs the interest on the loan. Rule of thumb is if the debt is costing you more than the investment will reliably earn, pay the debt off first. In reality, you need your investments to out perform the interest on your debt + inflation to stay ahead overall. Personally, I would be looking for at least an 8% annual return on your investments, and go with the 30 year option. DISCLAIMER: All investments involve risk and there is no guarantee of making any given earnings target.",
"title": ""
},
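A rough sketch of the 15-year versus 30-year comparison in the passage above: hold the monthly budget fixed, pay the mortgage, and invest whatever is left at an assumed return. The loan amount, rates, and budget here are illustrative assumptions, not the passage's actual inputs; the point is only that the low-return case tends to favor the shorter loan while the high-return case favors the longer one.

```python
def level_payment(principal, annual_rate, years):
    """Standard amortizing payment for a fixed-rate loan."""
    r = annual_rate / 12.0
    return principal * r / (1 - (1 + r) ** -(years * 12))

def final_balance(monthly_budget, mortgage_payment, mortgage_years, annual_return, horizon_years=30):
    """Invest whatever is left of the budget each month; once the mortgage ends, the whole budget is invested."""
    r = annual_return / 12.0
    balance = 0.0
    for month in range(horizon_years * 12):
        contribution = monthly_budget - (mortgage_payment if month < mortgage_years * 12 else 0.0)
        balance = balance * (1 + r) + contribution
    return balance

# Placeholder assumptions; the passage does not publish its exact inputs.
principal = 250_000
pmt15 = level_payment(principal, 0.0325, 15)   # 15-year loans usually carry a slightly lower rate
pmt30 = level_payment(principal, 0.0375, 30)
budget = pmt15 + 500                           # fixed monthly budget shared by the mortgage and investing

for annual_return in (0.03, 0.07, 0.10):
    b15 = final_balance(budget, pmt15, 15, annual_return)
    b30 = final_balance(budget, pmt30, 30, annual_return)
    print(f"{annual_return:.0%} return: 15-year path ends near {b15:,.0f}, 30-year path near {b30:,.0f}")
```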
{
"docid": "64bf683b2cb764773bfa0664236dc782",
"text": "Others have suggested paying off the student loan, mostly for the satisfaction of one less payment, but I suggest you do the math on how much interest you would save by paying early on each of the loans: When you do the calculations I think you'll see why paying toward the debt with the highest interest rate is almost always the best advice. Whether you can refinance the mortgage to a lower rate is a separate question, but the above calculation would still apply, just with different amortization schedules.",
"title": ""
},
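A first-order sketch of the math the passage above recommends: the first-year interest saved by an extra principal payment is roughly the amount paid times the loan's rate, which is why the highest-rate debt usually wins. The loan figures below are hypothetical, and the estimate ignores the full amortization schedule.

```python
# Hypothetical loans (balance, annual rate); purely illustrative figures.
loans = {"student loan": (18_000, 0.065), "mortgage": (200_000, 0.04)}
extra = 5_000  # one-time extra principal payment being considered

for name, (balance, rate) in loans.items():
    # First-year interest avoided by retiring `extra` of this balance early.
    saved = min(extra, balance) * rate
    print(f"Put {extra:,} toward the {name}: roughly {saved:,.0f} less interest in the first year")
```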
{
"docid": "32a5505c4337f438c896c4c4fe254687",
"text": "\"A major thing to consider when deciding whether to invest or pay off debt is cash flow. Specifically, how each choice affects your cash flow, and how your cash flow is affected by various events. Simply enough, your cash flow is the amount of money that passes through your finances during a given period (often a month or a year). Some of this is necessary payments, like staying current on loans, rent, etc., while other parts are not necessary, such as eating out. For example, you currently have $5,500 debt at 3% and another $2,500 at 5%. This means that every month, your cashflow effect of these loans is ($5,500 * 3% / 12) + ($2,500 * 5% / 12) = $24 interest (before any applicable tax effects), plus any required payments toward the principal which you don't state. To have the $8,000 paid off in 30 years, you'd be paying another $33 toward the principal, for a total of about $60 per month before tax effects in your case. If you take the full $7,000 you have available and use it to pay off the debt starting with the higher-interest loan, then your situation changes such that you now: Assuming that the repayment timeline remains the same, the cashflow effect of the above becomes $1,000 * 3% / 12 = $2.50/month interest plus $2.78/month toward the principal, again before tax effects. In one fell swoop, you just reduced your monthly payment from $60 to $5.25. Per year, this means $720 to $63, so on the $7,000 \"\"invested\"\" in repayment you get $657 in return every year for a 9.4% annual return on investment. It will take you about 11 years to use only this money to save another $7,000, as opposed to the 30 years original repayment schedule. If the extra payment goes toward knocking time off the existing repayment schedule but keeping the amount paid toward the principal per month the same, you are now paying $33 toward the principal plus $2.50 interest against the $1,000 loan, which means by paying $35.50/month you will be debt free in 30 months: two and a half years, instead of 30 years, an effective 92% reduction in repayment time. You immediately have another about $25/month in your budget, and in two and a half years you will have $60 per month that you wouldn't have if you stuck with the original repayment schedule. If instead the total amount paid remains the same, you are then paying about $57.50/month toward the principal and will be debt free in less than a year and a half. Not too shabby, if you ask me. Also, don't forget that this is a known, guaranteed return in that you know what you would be paying in interest if you didn't do this, and you know what you will be paying in interest if you do this. Even if the interest rate is variable, you can calculate this to a reasonable degree of certainty. The difference between those two is your return on investment. Compare this to the fact that while an investment in the S&P might have similar returns over long periods of time, the stock market is much more volatile in the shorter term (as the past two decades have so eloquently demonstrated). It doesn't do you much good if an investment returns 10% per year over 30 years, if when you need the money it's down 30% because you bought at a local peak and have held the investment for only a year. Also consider if you go back to school, are you going to feel better about a $5.25/month payment or a $60/month payment? (Even if the payments on old debt are deferred while you are studying, you will still have to pay the money, and it will likely be accruing interest in the meantime.) 
Now, I really don't advocate emptying your savings account entirely the way I did in the example above. Stuff happens all the time, and some stuff that happens costs money. Instead, you should be keeping some of that money easily available in a liquid, non-volatile form (which basically means a savings account without withdrawal penalties or a money market fund, not the stock market). How much depends on your necessary expenses; a buffer of three months' worth of expenses is an often recommended starting point for an emergency fund. The above should however help you evaluate how much to keep, how much to invest and how much to use to pay off loans early, respectively.\"",
"title": ""
},
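The arithmetic from the worked example above, restated as a short script. All figures are taken from the passage (the two loans, the $7,000 applied, the roughly $60 starting payment), so this only reproduces the passage's own numbers.

```python
# Figures taken from the worked example above.
loans = [(5_500, 0.03), (2_500, 0.05)]                 # (balance, annual rate)
interest_before = sum(b * r / 12 for b, r in loans)    # about $24/month

# After putting $7,000 toward the debt (higher-rate loan first), $1,000 at 3% remains.
remaining, rate = 1_000, 0.03
interest_after = remaining * rate / 12                 # about $2.50/month
principal_after = remaining / 360                      # about $2.78/month on the old 30-year schedule

payment_before = 60.0                                  # the example's stated total before prepaying
payment_after = interest_after + principal_after       # about $5.28/month
roi = (payment_before - payment_after) * 12 / 7_000    # about 9.4% per year on the $7,000

months_at_old_pace = remaining / 33                    # about 30 months at the old $33/month principal pace
print(f"Interest {interest_before:.2f} -> {interest_after:.2f} per month, "
      f"return {roi:.1%}, payoff in about {months_at_old_pace:.0f} months at the old principal pace")
```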
{
"docid": "072e32c49d800eee114844c789d21f4e",
"text": "I would be very careful with annuity products. If you don't mind sharing, what are the terms for the annuity? Usually I would recommend not to use retirement account to pay off debt, mainly because of the penalty that comes from withdrawing prematurely. But in this case, First of all, stop contributing to the annuity account if you're not contractually obligated. Second, try to convert your annuity assets to more common equity/debt products. Thirdly, try to cut back on spending to pay off debt, assuming you stopped paying 2X on housing, since 30k debt shouldn't be that hard to pay off with 100k income. Lastly, if all of the above are impossible, you can withdraw from that account to pay off your debt.",
"title": ""
}
] |
fiqa
|
c8b81550d0e99c64ab5517d8f63bdab3
|
What would be a wise way to invest savings for a newly married couple?
|
[
{
"docid": "9182607a4ada87e464e537e88a5480b0",
"text": "Forgive me as I do not know much about your fine country, but I do know one thing. You can make 5% risk free guaranteed. How, from your link: If you make a voluntary repayment of $500 or more, you will receive a bonus of 5 per cent. This means your account will be credited with an additional 5 per cent of the value of your payment. I'd take 20.900 of that amount saved and pay off her loan tomorrow and increase my net worth by 22.000. I'd also do the same thing for your loan. In fact in someways it is more important to pay off your loan first. As I understand it, you will eventually have to pay your loan back once your income rises above a threshold. Psychologically you make attempt to retard your future income in order to avoid payback. Those decisions may not be made overtly but it is likely they will be made. So by the end of the day (or as soon as possible), I'd have a bank balance of 113,900 and no student loan debt. This amounts to a net increase in net worth of 1900. It is a great, safe, first investment.",
"title": ""
},
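The 5% voluntary-repayment bonus arithmetic from the passage above, as a short snippet: a payment of X clears X times 1.05 of debt, so retiring a roughly 22,000 balance takes about 20,950 of cash, in line with the passage's approximate 20,900 figure.

```python
bonus_rate = 0.05       # 5% credited on voluntary repayments, per the scheme quoted above
balance = 22_000        # approximate loan balance from the passage

# A cash payment of X retires X * (1 + bonus_rate) of debt, so:
cash_needed = balance / (1 + bonus_rate)
print(f"About {cash_needed:,.0f} of cash retires the full {balance:,} balance")
```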
{
"docid": "70d0648d0d891a395ad640a3a2e267a7",
"text": "First, keep about six months' expenses in immediately-available form (savings account or similar). Second, determine how long you expect to hold on to the rest of it. What's your timeframe for buying a house or starting a family? This determines what you should do with the rest of it. If you're buying a house next year, then a CD (Certificate of Deposit) is a reasonable option; low-ish interest reate, but something, probably roughly inflation level, and quite safe - and you can plan things so it's available when you need it for the down payment. If you've got 3-5 years before you want to touch this money, then invest it in something reasonably safe. You can find reasonable funds that have a fairly low risk profile - usually a combination of stock and bonds - with a few percent higher rate of return on average. Still could lose money, but won't be all that risky. If you've got over five years, then you should probably invest them in an ETF that tracks a large market sector - in the US I'd suggest VOO or similar (Vanguard's S&P 500 fund), I'm sure Australia has something similar which tracks the larger market. Risky, but over 5+ years unlikely to lose money, and will likely have a better rate of return than anything else (6% or higher is reasonable to expect). Five years is long enough that it's vanishingly unlikely to lose money over the time period, and fairly likely to make a good return. Accept the higher risk here for the greater return; and don't cringe when the market falls, as it will go up again. Then, when you get close to your target date, start pulling money out of it and into CDs or safer investments during up periods.",
"title": ""
},
{
"docid": "1767804e4818a27da97de8602bf19757",
"text": "\"I agree with @Pete that you may be well-advised to pay off your loans first and go from there. Even though you may not be \"\"required\"\" to make payments on your own loan based on your income, that debt will play a large factor in your borrowing ability until it is gone, which hinders your ability to move toward home ownership. If you are in a fortunate enough position to totally pay off both your loan and hers from cash on hand then you should. It would still leave you with more than $112,000 and no debt, which is a big priority and advantage for a young couple. Mind you, this doesn't keep you from starting an investment plan with some portion of the remaining funds (the advice to keep six months' income in the bank is very wise) through perhaps a mutual fund if you don't want to directly manage the investments yourself. The advantage of mutual funds is the ability to choose the level of risk you're willing to take and let professionals manage how to achieve your goals for you. You can always make adjustments to your funds as your circumstances change. Again, I'd emphasize ridding yourself of the student loan debt as the first move, then looking at how to invest the remainder.\"",
"title": ""
}
] |
[
{
"docid": "2e0e28f088f2ad5624434a638e3881f3",
"text": "Secondly, should we pay off his student loans before investing? The subsidized loans won't be gaining any interest until he graduates so I was wondering if we should just pay off the unsubsidized loans and keep the subsidized ones for the next two years? From a purely financial standpoint, if the interest you gain on your savings is higher than the interest of the debt, then no. Otherwise, yes. If we were to keep 5,000 in savings and pay of the 3,000 of unsubsidized loans as I described above, that would leave us with about 15,000 dollars that is just lying around in my savings account. How should I invest this? Would you recommend high risk or low risk investments? I'm not from the US so take my answer with caution, but to me $15,000 seems a minimum safety net. Then again, it depends very much on any external help you can get in case of an emergency.",
"title": ""
},
{
"docid": "17ca7c806e458a344150bca1b1c60fa6",
"text": "\"There's a lot of personal preference and personal circumstance that goes into these decisions. I think that for a person starting out, what's below is a good system. People with greater needs probably aren't reading this question looking for an answer. How many bank accounts should I have and what kinds, and how much (percentage-wise) of my income should I put into each one? You should probably have one checking account and one savings / money market account. If you're total savings are too low to avoid fees on two accounts, then just the checking account at the beginning. Keep the checking account balance high enough to cover your actual debits plus a little buffer. Put the rest in savings. Multiple bank accounts beyond the basics or using multiple banks can be appropriate for some people in some circumstances. Those people, for the most part, will have a specific reason for needing them and maybe enough experience at that point to know how many and where to get them. (Else they ask specific questions in the context of their situation.) I did see a comment about partners - If you're married / in long-term relationship, you might replicate the above for each side of the marriage / partnership. That's a personal decision between you and your partner that's more about your philosophy in the relationship then about finance specifically. Then from there, how do I portion them out into budgets and savings? I personally don't believe that there is any generic answer for this question. Others may post answers with their own rules of thumb. You need to budget based on a realistic assessment of your own income and necessary costs. Then if you have money some savings. Include a minimal level of entertainment in \"\"necessary costs\"\" because most people cannot work constantly. Beyond that minimal level, additional entertainment comes after necessary costs and basic savings. Savings should be tied to your long term goals in addition to you current constraints. Should I use credit cards for spending to reap benefits? No. Use credit cards for the convenience of them, if you want, but pay the full balance each month and don't overdo it. If you lack discipline on your spending, then you might consider avoiding credit cards completely.\"",
"title": ""
},
{
"docid": "2cd417b896d953ed5d5f667607a01b85",
"text": "There is no issue - and no question - if you get married. The question is only relevant in the event that you go separate ways. Should that happen, you imply that you would want to refund whatever amount your girlfriend has paid toward the mortgage. The solution, then, would seem to be to exempt her from any payments, as you will either give that money back to her (if you break up) or make her a co-owner of the condo (if you get married). If you actually need her contributions to the monthly nut, you could give her a written agreement whereby you would refund her money (plus interest) at her discretion.",
"title": ""
},
{
"docid": "481467d7deea46bb5ea3a473c02ce5ef",
"text": "\"Pay off the credit cards. From now on, pay off the credit cards monthly. Under no circumstances should you borrow money. You have net worth but no external income. Borrowing is useless to you. $200,000 in two bank accounts, because if one bank collapses, you want to have a spare while you wait for the government to pay off the guarantee. Keep $50,000 in checking and another $50k in savings. The remainder put into CDs. Don't expect interest income beyond inflation. Real interest rates (after inflation) are often slightly negative. People ask why you might keep money in the bank rather than stocks/bonds. The problem is that stocks/bonds don't always maintain their value, much less go up. The bank money won't gain, but it won't suddenly lose half its value either. It can easily take five years after a stock market crash for the market to recover. You don't want to be withdrawing from losses. Some people have suggested more bonds and fewer stocks. But putting some of the money in the bank is better than bonds. Bonds sometimes lose money, like stocks. Instead, park some of the money in the bank and pick a more aggressive stock/bond mixture. That way you're never desperate for money, and you can survive market dips. And the stock/bond part of the investment will return more at 70/30 than 60/40. $700,000 in stock mutual funds. $300,000 in bond mutual funds. Look for broad indexes rather than high returns. You need this to grow by the inflation rate just to keep even. That's $20,000 to $30,000 a year. Keep the balance between 70/30 and 75/25. You can move half the excess beyond inflation to your bank accounts. That's the money you have to spend each year. Don't withdraw money if you aren't keeping up with inflation. Don't try to time the market. Much better informed people with better resources will be trying to do that and failing. Play the odds instead. Keep to a consistent strategy and let the market come back to you. If you chase it, you are likely to lose money. If you don't spend money this year, you can save it for next year. Anything beyond $200,000 in the bank accounts is available for spending. In an emergency you may have to draw down the $200,000. Be careful. It's not as big a cushion as it seems, because you don't have an external income to replace it. I live in southern California but would like to move overseas after establishing stable investments. I am not the type of person that would invest in McDonald's, but would consider other less evil franchises (maybe?). These are contradictory goals, as stated. A franchise (meaning a local business of a national brand) is not a \"\"stable investment\"\". A franchise is something that you actively manage. At minimum, you have to hire someone to run the franchise. And as a general rule, they aren't as turnkey as they promise. How do you pick a good manager? How will you tell if they know how the business works? Particularly if you don't know. How will you tell that they are honest and won't just embezzle your money? Or more honestly, give you too much of the business revenues such that the business is not sustainable? Or spend so much on the business that you can't recover it as revenue? Some have suggested that you meant brand or stock rather than franchise. If so, you can ignore the last few paragraphs. I would be careful about making moral judgments about companies. McDonald's pays its workers too little. Google invades privacy. Exxon is bad for the environment. Chase collects fees from people desperate for money. 
Tesla relies on government subsidies. Every successful company has some way in which it can be considered \"\"evil\"\". And unsuccessful companies are evil in that they go out of business, leaving workers, customers, and investors (i.e. you!) in the lurch. Regardless, you should invest in broad index funds rather than individual stocks. If college is out of the question, then so should be stock investing. It's at least as much work and needs to be maintained. In terms of living overseas, dip your toe in first. Rent a small place for a few months. Find out how much it costs to live there. Remember to leave money for bigger expenses. You should be able to live on $20,000 or $25,000 a year now. Then you can plan on spending $35,000 a year to do it for real (including odd expenses that don't happen every month). Make sure that you have health insurance arranged. Eventually you may buy a place. If you can find one that you can afford for something like $100,000. Note that $100,000 would be low in California but sufficient even in many places in the US. Think rural, like the South or Midwest. And of course that would be more money in many countries in South America, Africa, or southern Asia. Even southern and eastern Europe might be possible. You might even pay a bit more and rent part of the property. In the US, this would be a duplex or a bed and breakfast. They may use different terms elsewhere. Given your health, do you need a maid/cook? That would lean towards something like a bed and breakfast, where the same person can clean for both you and the guests. Same with cooking, although that might be a second person (or more). Hire a bookkeeper/accountant first, as you'll want help evaluating potential purchases. Keep the business small enough that you can actively monitor it. Part of the problem here is that a million dollars sounds like a lot of money but isn't. You aren't rich. This is about bare minimum for surviving with a middle class lifestyle in the United States and other first world countries. You can't live like a tourist. It's true that many places overseas are cheaper. But many aren't (including much of Europe, Japan, Australia, New Zealand, etc.). And the ones that aren't may surprise you. And you also may find that some of the things that you personally want or need to buy are expensive elsewhere. Dabble first and commit slowly; be sure first. Include rarer things like travel in your expenses. Long term, there will be currency rate worries overseas. If you move permanently, you should certainly move your bank accounts there relatively soon (perhaps keep part of one in the US for emergencies that may bring you back). And move your investments as well. Your return may actually improve, although some of that is likely to be eaten up by inflation. A 10% return in a country with 12% inflation is a negative real return. Try to balance your investments by where your money gets spent. If you are eating imported food, put some of the investment in the place from which you are importing. That way, if exchange rates push your food costs up, they will likely increase your investments at the same time. If you are buying stuff online from US vendors and having it shipped to you, keep some of your investments in the US for the same reason. Make currency fluctuations work with you rather than against you. I don't know what your circumstances are in terms of health. If you can work, you probably should. Given twenty years, your million could grow to enough to live off securely. 
As is, you would be in trouble with another stock market crash. You'd have to live off the bank account money while you waited for your stocks and bonds to recover.\"",
"title": ""
},
{
"docid": "dcc20635328d993b4b926dcedd1615d7",
"text": "I started out thinking like you but I quickly realised this was a bad approach. You are a team, aren't you? Are you equals or is one of you an inferior of lower value? I think you'll generate more shared happiness by acting as a team of equals. I'd pool your resources and share them as equals. I'd open a joint account and pay both your incomes directly into it. I'd pay all household bills from this. If you feel the need, have separate personal savings accounts paid into (equally) from the joint account. Major assets should be in joint names. This usually means the house. In my experience, it is a good idea to each have a small amount of individual savings that you jointly agree each can spend without consulting the other, even if the other thinks it is a shocking waste of money. However, spending of joint savings should only be by mutual agreement. I would stop worrying about who is bringing in the most income. Are you planning to gestate your children? How much is that worth? - My advice is to put all this aside, stop trying to track who adds what value to the joint venture and make it a partnership of equals where each contributes whatever they can. Suppose you fell ill and were unable to earn. Should you wife then retain all her income and keep you in poverty? I really believe life is simpler and happier without adding complex and stressful financial issues to the relationship. Of course, everyone is different. The main thing is to agree this between the two of you and be open to change and compromise.",
"title": ""
},
{
"docid": "cd7b2260cf22b2b28ded192e30046001",
"text": "\"I can only share with you my happened with my wife and I. First, and foremost, if you think you need to protect your assets for some reason then do so. Be open and honest about it. If we get a divorce, X stays with me, and Y stays with you. This seems silly, even when your doing it, but it's important. You can speak with a lawyer about this stuff as you need to, but get it in writing. Now I know this seems like planning for failure, but if you feel that foo is important to you, and you want to retain ownership of foo no mater what, then you have to do this step. It also works both ways. You can use, with some limitations, this to insulate your new family unit from your personal risks. For example, my business is mine. If we break up it stays mine. The income is shared, but the business is mine. This creates a barrier that if someone from 10 years ago sues my business, then my wife is protected from that. Keep in mind, different countries different rules. Next, and this is my advise. Give up on \"\"his and hers\"\" everything. It's just \"\"ours\"\". Together you make 5400€ decide how to spend 5400€ together. Pick your goals together. The pot is 5400€. End of line. It doesn't matter how much from one person or how much from another (unless your talking about mitigating losses from sick days or injuries or leave etc.). All that matters is that you make 5400€. Start your budgeting there. Next setup an equal allowance. That is money, set aside for non-sense reasons. I like to buy video games, my wife likes to buy books. This is not for vacation, or stuff together, but just little, tiny stuff you can do for your self, without asking \"\"permission\"\". The number should be small, and equal. Maybe 50€. Finally setup a budget. House Stuff 200€, Car stuff 400€. etc. etc. then it doesn't matter who bought the house stuff. You only have to coordinate so that you don't both buy house stuff. After some time (took us around 6 months) you will find out how this works and you can add on some rules. For example, I don't go to Best Buy alone. I will spend too much on \"\"house stuff\"\". My wife doesn't like to make the budget, so I handle that, then we go over it. Things like that.\"",
"title": ""
},
{
"docid": "1b410374cf170e730ad6a327bc8d22c8",
"text": "\"I often say \"\"don't let the tax tail wag the investing dog.\"\" I need to change that phrase a bit to \"\"don't let the tax tail wag the mortgage dog.\"\" Getting a tax deduction on a 4% mortgage basically results (assuming you already itemize) in an effective 3% rate mortgage. The best way to avoid tax is save pretax in a 401(k), IRA, or both. You are 57, and been through a tough time. You're helping your daughter through college, which is an expense, and admirable kindness to her. But all this means you won't start saving $10K/yr until age 59. The last thing I'd do is buy a bigger home and take on a mortgage. Unless you told me the house you want has an in-law apartment that will bring in a high rent, or can be used to rent rooms and be a money maker, I'd not do this. No matter how small the mortgage, your property tax bill will go up, and there would be a mortgage to pay. Even a tiny mortgage payment, $400, is nearly half that $10K potential annual savings plan. Your income is now excellent. Can your wife do anything to get hers to a higher level? In your situation, I'd save every cent I can.\"",
"title": ""
},
{
"docid": "3c4db89839bf06a8c684257ea8615b86",
"text": "\"Since the other answers have covered mutual funds/ETFs/stocks/combination, some other alternatives I like - though like everything else, they involve risk: Example of how these other \"\"saving methods\"\" can be quite effective: about ten years ago, I bought a 25lb bag of quinoa at $19 a bag. At the same company, quinoa is now over $132 for a 25lb bag (590%+ increase vs. the S&P 500s 73%+ increase over the same time period). Who knows what it will cost in ten years. Either way, working directly with the farmers, or planting it myself, may become even cheaper in the future, plus learning how to keep and store the seeds for the next season.\"",
"title": ""
},
{
"docid": "5fb65a985b04ebc0e224cab352a24540",
"text": "\"It is my opinion that part of having a successful long-term relationship is being committed to the other person's success and well-being. This commitment is a form of investment in and of itself. The returns are typically non-monetary, so it's important to understand what money actually is. Money is a token people exchange for favors. If I go to a deli and ask for a sandwich. I give them tokens for the favor of having received a sandwich. The people at the deli then exchange those tokens for other favors, and that's the entire economy: people doing favors for other people in exchange for tokens that represent more favors. Sometimes being invested in your spouse is giving them a back rub when they've had a hard day. The investment pays off when you have a hard day and they give you a back rub. Sometimes being invested in your spouse is taking them to a masseuse for a professional massage. The investment pays off when they get two tickets to that thing you love. At the small scale it's easy to mostly ignore minor monetary discrepancies. At the large scale (which I think £50k is plenty large enough given your listed net worth) it becomes harder to tell if the opportunity cost will be worth making that investment. It pretty much comes down to: Will the quality-of-life improvements from that investment be better than the quality-of-life improvements you receive from investing that money elsewhere? As far as answering your actual question of: How should I proceed? There isn't a one-size fits all answer to this. It comes down to decisions you have to make, such as: * in theory it's easy to say that everyone should be able to trust their spouse, but in practice there are a lot of people who are very bad at handling money. It can be worthwhile in some instances to keep your spouse at an arms length from your finances for their own good, such as if your spouse has a gambling addiction. With all of that said, it sounds like you're living in a £1.5m house rent-free. How much of an opportunity cost is that to your wife? Has she been freely investing in your well-being with no explicit expectation of being repaid? This can be your chance to provide a return on her investment. If it were me, I'd make the investment in my spouse, and consider it \"\"rent\"\" while enjoying the improvements to my quality of life that come with it.\"",
"title": ""
},
{
"docid": "785dfcde9313891b41c7a84d465d469b",
"text": "I feel there are two types of answer: One: the financial. Suck all the emotion out of the situation, and treat the two individuals as individuals. If that works for the two of you, fantastic. Two: the philosophical. You're married, it's a union, so unify the funds. If that works for you, fantastic. Personally, my partner and I do the latter. The idea of separate pots and separate accounts and one mixed fund etc makes no sense to us. But that's us. The first step for you in deciding on an approach is to know yourselves as people - and everything else will follow.",
"title": ""
},
{
"docid": "c6dba7fc748b0af0e57a483470ae31a5",
"text": "\"It's hard to know what to tell you without knowing income, age, marital status, etc., so I'll give some general comments. ETFs come in all varieties. Some have more volatility than others. It all depends on what types of assets are in the fund. Right now it's tough to outpace inflation in an investment that's \"\"safe\"\" (CDs for example). Online savings accounts pay 1% or less now. Invest only in what you understand, and only after everything else is taken care of (debt, living expenses, college costs, etc.) A bank account is just fine. You're investing in US Dollars. Accumulating cash isn't a bad thing to do.\"",
"title": ""
},
{
"docid": "aef86ebe299a964f826a4562492623f3",
"text": "\"The suggestions towards retirement and emergency savings outlined by the other posters are absolute must-dos. The donations towards charitable causes are also extremely valuable considerations. If you are concerned about your savings, consider making some goals. If you plan on staying in an area long term (at least five years), consider beginning to save for a down payment to own a home. A rent-versus-buy calculator can help you figure out how long you'd need to stay in an area to make owning a home cost effective, but five years is usually a minimum to cover closing costs and such compared to rending. Other goals that might be worthwhile are a fully funded new car fund for when you need new wheels, the ability to take a longer or nicer vacation, a future wedding if you'd like to get married some day, and so on. Think of your savings not as a slush fund of money sitting around doing nothing, but as the seed of something worthwhile. Yes, you will only be young once. However being young does not mean you have to be Carrie from Sex in the City buying extremely expensive designer shoes or live like a rapper on Cribs. Dave Ramsey is attributed as saying something like, \"\"Live like no one else so that you can live like no one else.\"\" Many people in their 30s and 40s are struggling under mortgages, perhaps long-left-over student loan debt, credit card debt, auto loans, and not enough retirement savings because they had \"\"fun\"\" while they were young. Do you have any remaining debt? Pay it off early instead of saving so much. Perhaps you'll find that you prefer to hit that age with a fully paid off home and car, savings for your future goals (kids' college tuitions, early retirement, etc.). Maybe you want to be able to afford some land or a place in a very high cost of living city. In other words - now is the time to set your dreams and allocate your spare cash towards them. Life's only going to get more expensive if you choose to have a family, so save what you can as early as possible.\"",
"title": ""
},
{
"docid": "62f2fd8bbc997d337c69e0060df6684c",
"text": "\"You are a teacher with income. Presumably, between you and your spouse-to-be, more than $5500. That's all that matters. Unless, of course you make \"\"too much money\"\" (i.e. $184K or over). That's another story. The actual deposit can be from any source. The example we often give is that a teenager with legitimate income can have a Roth, up to the income or $5500, whichever is lower, funded by gifts from a parent, or from savings. They don't need to turn over the money they made. The money you are getting is a gift, and it's your money to do what you wish.\"",
"title": ""
},
{
"docid": "5d5e4e1d4f9c4dd063b662a9cce9501c",
"text": "\"If you ask ten different couples what they do, depending on a variety of factors, you'll get anywhere between two and ten different answers. One personal finance blogger that I read swears by the fact that he and his wife keep their finances totally separate. His wife has her own retirement account, he has his. His wife has her own checking and savings, he has his. They pay fifty-fifty for expenses and each buy their own \"\"toys\"\" from their own accounts. He views this as valuable for allowing them to have their own personal finance styles, as his wife is a very conservative investor and he is more generous. My spouse and I have mostly combined finances, and view all of our money as joint (even though there are a smattering of accounts between us with just one name on them as holdovers from before we were married). Almost all of our purchasing decisions except regular groceries are joint. I couldn't imagine it any other way. It leaves us both comfortable with our financial situation and forces us to be on the same page with regards to our lifestyle decisions. There's also the ideological view that since we believe marriage united us, we try to live that out. That's just us, though. We don't want to force it on others. Some couples find a balance between joint accounts and his and her fun money stashes. You might find yet another arrangement that works for you, such as the one you already described. What's going to be important is that you realize that all couples have the same six basic arguments, finances being one of them. The trick is in how you disagree. If you can respectfully and thoughtfully discuss your finances together to find the way that has the least friction for you, you're doing well. Some amount of friction is not just normal, it's almost guaranteed.\"",
"title": ""
},
{
"docid": "3ae51aec7487f3a23fc9eb5b91d38c5e",
"text": "\"The $1K in funds are by default your emergency fund. If absolutely necessary, emergency funds may need to come from debt, a credit capacity, focus on building credit to leverage lower rates for living expenses eventually needed. Profitable organizations & proprietors, borrow at a lower cost of capital than their return. Join your local credit union, you're welcome to join mine online, the current rates for the first $500 in both your checking and savings is 4.07%, it's currently the fourth largest in the U.S. by assets. You may join as a \"\"family member\"\" to me (Karl Erdmann), not sure what their definition of \"\"family\"\" is, I'd be happy to trace our ancestry if need be or consider other options. Their current incentive program, like many institutions have often, will give you $100 for going through the hassle to join and establish a checking and savings. Some institutions, such as this credit union, have a lower threshold to risk, applicants may be turned down for an account if there is any negative history or a low credit score, shooting for a score of 600 before applying seems safest. The web services, as you mentioned, have significantly improved the layman's ability to cost effectively invest funds and provide liquidity. Robinhood currently seems to be providing the most affordable access to the market. It goes without saying, stay objective with your trust of any platform, as you may have noticed, there is a detailed explanation of how Robinhood makes their money on this stack exchange community, they are largely backed by venture funding, hopefully the organization is able to maintain a low enough overhead to keep the organization sustainable in the long run. The services that power this service such as Plaid, seem promising and underrated, but i digress. The platform gives access for users to learn how investing works, it seems safest to plan a diversified portfolio utilizing a mix of securities,such as low Beta stocks or \"\"blue chip\"\" companies with clear dividend policies. One intriguing feature, if you invest in equities is casting votes on decisions in shareholder meetings. Another popular investment asset class that is less liquid and perhaps something to work toward is real estate. Google the economist \"\"Matthew Rognlie\"\" for his work on income equality on this type of investment. There are many incentives for first time homeowners, saving up for a down payment is the first step. Consider adding to your portfolio a Real Estate Investment Trust (REITs) to gain a market position. Another noteworthy approach to this idea is an investment commercial property cooperative organization, currently the first and only one is called NorthEast Investment Cooperative, one stock of class A is $1K. If you are interested and plan to focus on equities, consider dropping into your college's Accounting Capstone course to learn more about the the details of fundamental and technical analysis of an organization. The complexities of investing involve cyclical risk, macro and micro economic factors, understanding financial statements and their notes, cash flow forecasting - discounting, market timing, and a host of other details Wikipedia is much more helpful at detailing. It's safe to assume initial investment decisions by unsophisticated investors are mostly whimsical, and likely will only add up to learning opportunities, however risk is inherit in all things, including sitting on cash that pays a price of inflation. 
A promising mindset for long-term investing is to look for organizations that focus on conscious business practices. Another way to think of investing is that you are already somewhat of a \"\"sophisticated investor\"\" and could beat the market by what you know given your background, catching wind of certain information first, or acting on new trends or technology quickly. Move carefully with any perhaps-biased \"\"bullish\"\" or \"\"bearish\"\" mindset. Thinking independently is helpful, as is constantly becoming familiar with different ideas from professionals in a diverse set of backgrounds, and simulating decisions in portfolios. Here is an extremely limited set of authors and outlets that may have ideas worth digging into: MIT Tech Reviews (informative), Bloomberg TV (it's free, informative), John Mackey (businessman), Paul Mason (provocative journalist). Google Finance is a simple and free go-to application; use the \"\"cost basis\"\" feature for \"\"paper\"\" or real trades, and it's easy to import transactions from a .csv. This seems sufficient to start off with. Enjoy the journey, and aim for real value with your resources.\"",
"title": ""
}
] |
fiqa
|
315c5c36755dad663d377fbefe581cca
|
What to know before purchasing Individual Bonds?
|
[
{
"docid": "f6ac2bcc59fee8f3220b9dbae3fc484a",
"text": "\"A few points that I would note: Call options - Could the bond be called away by the issuer? This is something to note as some bonds may end up not being as good as one thought because of this option that gets used. Tax considerations - Are you going for corporate, Treasury, or municipals? Different ones may have different tax consequences to note if you aren't holding the bond in a tax-advantaged account,e.g. Roth IRA, IRA or 401k. Convertible or not? - Some bonds are known as \"\"convertibles\"\" since the bond comes with an option on the stock that can be worth considering for some kinds of bonds. Inflation protection - Some bonds like TIPS or series I savings bonds can have inflation protection built into them that can also be worth understanding. In the case of TIPS, there are principal adjustments while the savings bond will have a change in its interest rate. Default risk - Some of the higher yield bonds may have an issuer go under which is another way one may end up with equity in a company rather than getting their money back. On the other side, for some municipals one could have the risk of the bond not quite being as good as one thought like some Detroit bonds that may end up in a different result given their bankruptcy but there are also revenue bonds that may not meet their target for another situation that may arise. Some bonds may be insured though this requires a bit more research to know the credit rating of the insurer. As for the latter question, what if interest rates rise and your bond's value drops considerably? Do you hold it until maturity or do you try to sell it and get something that has a higher yield based on face value?\"",
"title": ""
}
] |
[
{
"docid": "3f8851d458841a55b140337c80cb1702",
"text": "\"The first thing that it is important to note here is that the examples you have given are not individual bond prices. This is what is called the \"\"generic\"\" bond price data, in effect a idealised bond with the indicated maturity period. You can see individual bond prices on the UK Debt Management Office website. The meaning of the various attributes (price, yield, coupon) remains the same, but there may be no such bond to trade in the market. So let's take the example of an actual UK Gilt, say the \"\"4.25% Treasury Gilt 2019\"\". The UK Debt Management Office currently lists this bond as having a maturity date of 07-Mar-2019 and a price of GBP 116.27. This means that you will pay 116.27 to purchase a bond with a nominal value of GBP 100.00. Here, the \"\"nominal price\"\" is the price that HM Treasury will buy the bond back on the maturity date. Note that the title of the bond indicates a \"\"nominal\"\" yield of 4.25%. This is called the coupon, so here the coupon is 4.25%. In other words, the treasury will pay GBP 4.25 annually for each bond with a nominal value of GBP 100.00. Since you will now be paying a price of GBP 116.27 to purchase this bond in the market today, this means that you will be paying 116.27 to earn the nominal annual interest of 4.25. This equates to a 3.656% yield, where 3.656% = 4.25/116.27. It is very important to understand that the yield is not the whole story. In particular, since the bond has a nominal value of GBP100, this means that as the maturity date approaches the market price of the bond will approach the nominal price of 100. In this case, this means that you will witness a loss of capital over the period that you hold the bond. If you hold the bond until maturity, then you will lose GBP 16.27 for each nominal GBP100 bond you hold. When this capital loss is netted off the interest recieved, you get what is called the gross redemption yield. In this case, the gross redemption yield is given as approximately 0.75% per annum. NB. The data table you have included clearly has errors in the pricing of the 3 month, 6 month, and 12 month generics.\"",
"title": ""
},
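The gilt arithmetic from the passage above in code form: the running yield is the coupon divided by the price, and the gross redemption yield can be found numerically as the rate that reprices the bond at 116.27. The whole-year maturity and annual-coupon simplifications below are assumptions, so the solved yield lands a little above the passage's quoted ~0.75% rather than matching it exactly.

```python
def bond_price(face, coupon_rate, years, ytm):
    """Price of an annual-coupon bond at a given yield to maturity."""
    coupon = face * coupon_rate
    return sum(coupon / (1 + ytm) ** t for t in range(1, years + 1)) + face / (1 + ytm) ** years

face, coupon_rate, price = 100.0, 0.0425, 116.27   # figures from the passage
years = 5                                          # assumed whole years to the 2019 maturity

running_yield = face * coupon_rate / price         # about 3.66%, as in the passage

# Bisect for the yield that reprices the bond at 116.27 (the gross redemption yield).
lo, hi = -0.05, 0.20
for _ in range(100):
    mid = (lo + hi) / 2
    if bond_price(face, coupon_rate, years, mid) > price:
        lo = mid    # model price too high, so the yield must be higher
    else:
        hi = mid
print(f"Running yield {running_yield:.3%}, gross redemption yield about {(lo + hi) / 2:.2%}")
```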
{
"docid": "96cec02c99cd390afdf4af6154c169c1",
"text": "\"So after you've learned about bonds, you might find yourself learning about interest rates. You might, in fact, discover that there's no such thing as a \"\"correct\"\" interest rate, or even a true \"\"market\"\" interest rate. PS We already had the housing bubble. It has come, and gone. What *new* bubble are you referring to?\"",
"title": ""
},
{
"docid": "09dbc013a2c9df18506d12e2075fb6a0",
"text": "As you are 14, you cannot legally buy premium bonds yourself. Your parents could buy them and hold them for you, mind you. That said, I'm not a fan of premium bonds. They are a rather weird combination of a savings account and a lottery. Most likely, you'll receive far less than the standard interest rate you'd get from a savings account. Sure, they may pay off, but they probably won't. What I would suggest, given that you expect to need the money in five years, is simply place it in a savings account. Shop around for the best interest rate you can find. This article lists interest rates, though you'll want to confirm that it is up to date. There are other investment options. You could invest in a mutual fund which tracks the stock market or the bond market, for example. On average, that'll give you a higher rate of return. But there's more risk, and as you want the money in five years, I'd be uncomfortable recommending that at this time. If you were looking at investing for 25 years, that'd be a no-brainer. But it's a bit risky for 5 years. Your investment may go down, and that's not something I'd have been happy with when I was 14. There may be some other options specific to the UK which I don't know about. If so, hopefully someone else will chime in.",
"title": ""
},
{
"docid": "d6f5042870c1a4aa59de7578bdc238f6",
"text": "> The purpose of buying these bonds was not to step in due to the absence of a market. Rather, the purpose was to deliberately bid up the price of these bonds (ahead of the market), causing their price to rise and yields (interest rates) to drop. There are some important things you need to understand about bubbles and how they form. When interest rates are artificially low and down payments aren't required for many loans, do you agree this is a recipe for a bubble?",
"title": ""
},
{
"docid": "c1abc18736c5ab5314bf49da7f5ab4ea",
"text": "Without providing direct investment advice, I can tell you that bond most assuredly are not recession-proof. All investments have risk, and each recession will impact asset-classes slightly differently. Before getting started, BONDS are LOANS. You are loaning money. Don't ever think of them as anything but that. Bonds/Loans have two chief risks: default risk and inflation risk. Default risk is the most obvious risk. This is when the person to whom you are loaning, does not pay back. In a recession, this can easily happen if the debtor is a company, and the company goes bankrupt in the recessionary environment. Inflation risk is a more subtle risk, and occurs when the (fixed) interest rate on your loan yields less than the inflation rate. This causes the 'real' value of your investment to depreciate over time. The second risk is most pronounced when the bonds that you own are government bonds, and the recession causes the government to be unable to pay back its debts. In these circumstances, the government may print more money to pay back its creditors, generating inflation.",
"title": ""
},
{
"docid": "f4b2fc93da9a9d7f5c1bc8869a4c706f",
"text": "For most people, you don't want individual bonds. Unless you are investing very significant amounts of money, you are best off with bond funds (or ETFs). Here in Canada, I chose TDB909, a mutual fund which seeks to roughly track the DEX Universe Bond index. See the Canadian Couch Potato's recommended funds. Now, you live in the U.S. so would most likely want to look at a similar bond fund tracking U.S. bonds. You won't care much about Canadian bonds. In fact, you probably don't want to consider foreign bonds at all, due to currency risk. Most recommendations say you want to stick to your home country for your bond investments. Some people suggest investing in junk bonds, as these are likely to pay a higher rate of return, though with an increased risk of default. You could also do fancy stuff with bond maturities, too. But in general, if you are just looking at an 80/20 split, if you are just looking for fairly simple investments, you really shouldn't. Go for a bond fund that just mirrors a big, low-risk bond index in your home country. I mean, that's the implication when someone recommends a 60/40 split or an 80/20 split. Should you go with a bond mutual fund or with a bond ETF? That's a separate question, and the answer will likely be the same as for stock mutual funds vs stock ETFs, so I'll mostly ignore the question and just say stick with mutual funds unless you are investing at least $50,000 in bonds.",
"title": ""
},
{
"docid": "478cdde040cedfb6e01af7f6e8296744",
"text": "I looked into the investopedia one (all their videos are mazing), but that detail just was not clear to me, it also makes be wonder, if a country issues bonds to finance itself, what happens at maturity when literally millions of them need to be paid? The income needs to have grown to that level or it defaults? Wouldn't all the countries default if that was the case, or are bonds being issued to being able to pay maturity of older bonds already? (I'm freaking myself out by realizing this)",
"title": ""
},
{
"docid": "1856f12fa004f6ee1b1d9889a4827b0d",
"text": "Bonds by themselves aren't recession proof. No investment is, and when a major crash (c.f. 2008) occurs, all investments will be to some extent at risk. However, bonds add a level of diversification to your investment portfolio that can make it much more stable even during downturns. Bonds do not move identically to the stock market, and so many times investing in bonds will be more profitable when the stock market is slumping. Investing some of your investment funds in bonds is safer, because that diversification allows you to have some earnings from that portion of your investment when the market is going down. It also allows you to do something called rebalancing. This is when you have target allocation proportions for your portfolio; say 60% stock 40% bond. Then, periodically look at your actual portfolio proportions. Say the market is way up - then your actual proportions might be 70% stock 30% bond. You sell 10 percentage points of stocks, and buy 10 percentage points of bonds. This over time will be a successful strategy, because it tends to buy low and sell high. In addition to the value of diversification, some bonds will tend to be more stable (but earn less), in particular blue chip corporate bonds and government bonds from stable countries. If you're willing to only earn a few percent annually on a portion of your portfolio, that part will likely not fall much during downturns - and in fact may grow as money flees to safer investments - which in turn is good for you. If you're particularly worried about your portfolio's value in the short term, such as if you're looking at retiring soon, a decent proportion should be in this kind of safer bond to ensure it doesn't lose too much value. But of course this will slow your earnings, so if you're still far from retirement, you're better off leaving things in growth stocks and accepting the risk; odds are no matter who's in charge, there will be another crash or two of some size before you retire if you're in your 30s now. But when it's not crashing, the market earns you a pretty good return, and so it's worth the risk.",
"title": ""
},
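The rebalancing step described in the passage above (compare actual weights to a 60/40 target, sell what has drifted above target and buy what has drifted below) is simple arithmetic; here is a minimal Python sketch of it. The balances and the 60/40 target are hypothetical illustrations, not a recommendation.

```python
def rebalance(holdings, targets):
    """Amount to buy (+) or sell (-) per asset to restore the target weights."""
    total = sum(holdings.values())
    return {asset: targets[asset] * total - value for asset, value in holdings.items()}

# A portfolio that has drifted to roughly 70% stocks / 30% bonds against a 60/40 target.
holdings = {"stocks": 70_000, "bonds": 30_000}
targets = {"stocks": 0.60, "bonds": 0.40}

print(rebalance(holdings, targets))
# {'stocks': -10000.0, 'bonds': 10000.0} -> sell $10k of stocks, buy $10k of bonds
```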
{
"docid": "10bc3540ae3ca68042d92856ce19fd30",
"text": "What can you give them as security? 1. A fixed/floating charge over assets 2. Negative covenants/Non-subordination agreements 3. Real Mortgage 4. Chattel Mortgage 5. Personal or inter-business Guarantees Essentially a bond is just a debt agreement, it is when you sell standardised bonds over a market that regulation comes into it. Now I am from Australia, so I can't comment on US policies etc...",
"title": ""
},
{
"docid": "acd9a181cdb5204856ef8ff054d77951",
"text": "A bond fund has a 5% yield. You can take 1/.05 and think of it as a 20 P/E. I wouldn't, because no one else does, really. An individual bond has a coupon yield, and a YTM, yield to maturity. A bond fund or ETF usually won't have a maturity, only a yield.",
"title": ""
},
{
"docid": "94ca39ebe5195ff60e6057e66b8c62a6",
"text": "Since you seem to be interested in investing in individual stocks, this answer will address that. As for the general question of investing, the answer that @johnfx gave is just about as good as it gets. Investing in individual stocks is extremely risky and takes a LOT of work to do right. On top of the fairly obvious need to research a stock before you buy, there is the matter of keeping up with the stocks to know when you need to sell as well as myriad other facets of investing. Paid professionals spend all day, every day, doing this and they have a hard time beating an index fund. Unless you take the time to educate yourself and are willing to continually put in a good bit of effort, I would advise you to stay away from individual stocks and rely on mutual funds.",
"title": ""
},
{
"docid": "84e47b81c35727ec73c7b526568e29b0",
"text": "Buy a fund of bonds, there are plenty and are registered on your stockbroker account as 'funds' rather than shares. Otherwise, to the individual investor, they can be considered as the same thing. Funds (of bonds, rather than funds that contain property or shares or other investments) are often high yield, low volatility. You buy the fund, and let the manager work it for you. He buys bonds in accordance to the specification of the fund (ie some funds will say 'European only', or 'global high yield' etc) and he will buy and sell the bonds regularly. You never hold to maturity as this is handled for you - in many cases, the manager will be buying and selling bonds all the time in order to give you a stable fund that returns you a dividend. Private investors can buy bonds directly, but its not common. Should you do it? Up to you. Bonds return, the company issuing a corporate bond will do so at a fixed price with a fixed yield. At the end of the term, they return the principal. So a 20-year bond with a 5% yield will return someone who invests £10k, £500 a year and at the end of the 20 years will return the £10k. The corporate doesn't care who holds the bond, so you can happily sell it to someone else, probably for £10km give or take. People say to invest in bonds because they do not move much in value. In financially difficult times, this means bonds are more attractive to investors as they are a safe place to hold money while stocks drop, but in good times the opposite applies, no-one wants a fund returning 5% when they think they can get 20% growth from a stock.",
"title": ""
},
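To make the passage's £10k / 5% / 20-year example concrete, here is a tiny Python sketch of the cash flows from holding such a bond to maturity. The figures are the passage's own illustration and ignore taxes, fees and any price movement if you sold early.

```python
# Hypothetical figures from the passage: £10,000 invested, 5% coupon, 20-year term.
face_value = 10_000
coupon_rate = 0.05
years = 20

annual_coupon = face_value * coupon_rate      # £500 received each year
total_coupons = annual_coupon * years         # £10,000 of interest over the full term
final_year_cash = annual_coupon + face_value  # last coupon plus the returned principal

print(annual_coupon, total_coupons, final_year_cash)  # 500.0 10000.0 10500.0
```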
{
"docid": "8396ac0417d62417654544d160748d93",
"text": "Well the only way you can actually legally do a bond issuance is through a broker dealer. In order to register and actually sell the securities to outside investors, you need a registered representative at a registered broker dealer. This falls under blue sky laws. You LEGALLY have to have one. Also, why would you prefer to issue? I mean public debt offerings are massive undertakings (hence why I said unless no one is pitching you, why do it). For example, as a first time issuer, you would have to register with the SEC, every state you plan to issue in, submit historical AUDITED financials, comply with SOX and other accounting filings, bring in due diligence, go on roadshows, etc. This stuff costs A LOT of time and money. For example, since you've never issued before, if you're cooking the books and the bankers don't catch it they can be legally liable for fraud. Also, how much are you even trying to raise?",
"title": ""
},
{
"docid": "eb75d87bb9c96b01960de628a1a4bd1e",
"text": "\"Junk Bonds (aka High Yield bonds) are typically those bonds from issues with credit ratings below BBB-. Not all such companies are big risks. They are just less financially sound than other, higher rated, companies. If you are not comfortable doing the analysis yourself, you should consider investing in a mutual fund, ETF, or unit trust that invests in high yield bonds. You get access to \"\"better quality\"\" issues because a huge amount of the debt markets goes to the institutional channels, not to the retail markets. High yield (junk) bonds can make up a part of your portfolio, and are a good source of regular income. As always, you should diversify and not have everything you own in one asset class. There are no real rules of thumb for asset allocation -- it all depends on your risk tolerance, goals, time horizon, and needs. If you don't trust yourself to make wise decisions, consult with a professional whom you trust.\"",
"title": ""
},
{
"docid": "580b87fa9582f0ad27639ac85955d59a",
"text": "\"Looking at the list of bonds you listed, many of them are long dated. In short, in a rate rising environment (it's not like rates can go much lower in the foreseeable future), these bond prices will drop in general in addition to any company specific events occurred to these names, so be prepared for some paper losses. Just because a bond is rated highly by credit agencies like S&P or Moody's does not automatically mean their prices do not fluctuate. Yes, there is always a demand for highly rated bonds from pension funds, mutual funds, etc. because of their investment mandates. But I would suggest looking beyond credit ratings and yield, and look further into whether these bonds are secured/unsecured and if secured, by what. Keep in mind in recent financial crisis, prices of those CDOs/CLOs ended up plunging even though they were given AAA ratings by rating agencies because some were backed by housing properties that were over-valued and loans made to borrowers having difficulties to make repayments. Hence, these type of \"\"bonds\"\" have greater default risks and traded at huge discounts. Most of them are also callable, so you may not enjoy the seemingly high yield till their maturity date. Like others mentioned, buying bonds outright is usually a big ticket item. I would also suggest reviewing your cash liquidity and opportunity cost as oppose to investing in other asset classes and instruments.\"",
"title": ""
}
] |
fiqa
|
f5e71dec178721222bb40924fcf7f7f9
|
Invest in (say, index funds) vs spending all money on home?
|
[
{
"docid": "7ec624787c105617815d274c4cc520a0",
"text": "Rules of thumb? Sure - Put down 20% to pay no PMI. The mortgage payment (including property tax) should be no more than 28% of your gross monthly income. These two rules will certainly put a cap on the home price. If you have more than the 20% to put down on the house you like, stop right here. Don't put more down and don't buy a bigger house. Set that money aside for long term investing (i.e. retirement savings) or your emergency fund. You can always make extra payments and shorten the length of the mortgage, you just can't easily get it back. In my opinion, one is better off getting a home that's too small and paying the transaction costs to upsize 5-10 years later than to buy too big, and pay all the costs associated with the home for the time you are living there. The mortgage, property tax, maintenance, etc. The too-big house can really take it toll on your wallet.",
"title": ""
},
{
"docid": "631bc94058215d246ca94f6f20e91eb5",
"text": "The short answer is that it depends on the taxation laws in your country. The long answer is that there are usually tax avoidance mechanisms that you can use which may make it more economically feasible for you to go one way or the other. Consider the following: The long term average growth rate of the stock market in Australia is around 7%. The average interest on a mortgage is 4.75%. Assuming you have money left over from a 20% deposit, you have a few options. You could: 1) Put that money into an index fund for the long term, understanding that the market may not move for a decade, or even move downwards; 2) Dump that money straight into the mortgage; 3) Put that money in an offset account Option 1 will get you (over the course of 30-40 years) around 7% return. If and when that profit is realised it will be taxed at a minimum of half your marginal tax rate (probably around 20%, netting you around 5.25%) Option 2 will effectively earn you 4.75% pa tax free Option 3 will effectively earn you 4.75% pa tax free with the added bonus that the money is ready for you to draw upon on short notice. Of the three options, until you have a good 3+ months of living expenses covered, I'd go with the offset account every single time. Once you have a few months worth of living expenses covered, I would the adopt a policy of spreading your risk. In Australia, that would mean extra contributions to my Super (401k in the US) and possibly purchasing an investment property as well (once I had the capital to positively gear it). Of course, you should find out more about the tax laws in your country and do your own maths.",
"title": ""
},
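The comparison in the passage above boils down to a few after-tax rates. The sketch below just restates its illustrative numbers in Python; the 7% market return, the implied ~25% effective tax on realised gains, and the 4.75% mortgage rate are the passage's assumptions, not current figures, and tax treatment differs by country.

```python
# Effective annual benefit of each option, using the passage's illustrative numbers.
market_return = 0.07    # assumed long-run share market return
tax_on_gains = 0.25     # effective tax rate implied by the passage's ~5.25% net figure
mortgage_rate = 0.0475  # interest avoided by paying down or offsetting the mortgage

index_fund_after_tax = market_return * (1 - tax_on_gains)  # ~5.25%, but volatile
extra_repayments = mortgage_rate                           # tax-free, but locked away
offset_account = mortgage_rate                             # tax-free and still accessible

print(round(index_fund_after_tax, 4), extra_repayments, offset_account)
# 0.0525 0.0475 0.0475
```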
{
"docid": "88890edbedd3979b6a8244e4a8df8b85",
"text": "\"Here in the UK, the rule of thumb is to keep a lot of equity in your home if you can. I assume here that you have a lot of savings you're considering using. If you only have say 10% of the house price you wouldn't actually have a lot of choice in the matter, the mortgage lender will penalise you heavily for low deposits. The practical minimum is 5%, but for most people a 95% mortgage is just silly (albeit not as silly as the 100% or greater mortgages you could get pre-2008), and you should take serious individual advice before considering it. According to Which, the average in the UK for first-time buyers is 20% (not the best source for that data I confess, but a convenient one). Above 20% is not at all unusual. You'll do an affordability calculation to figure out how much you can borrow, which isn't at all the same as how much you should borrow, but does get you started. Basically you, decide how much a month you can spend on mortgage payments. The calculation will let you put every penny into this if you choose to, but in practice you'll want some discretionary income so don't do that. decide the term of the mortgage. For a young first-time buyer in the UK I think you'd typically take a 25-year term and consider early repayment options rather than committing to a shorter term, but you don't have to. Mortgage lenders will offer shorter terms as long as you can afford the payments. decide how much you're putting into a deposit make subtractions for cost of moving (stamp duty if applicable, fees, removals aka \"\"people to lug your stuff\"\"). receive back a number which is the house price you can pay under these constraints (and of course a breakdown of what the mortgage principle would be, and the interest rate you'll pay). This step requires access to lender information, since their rates depend on personal details, deposit percentage, phase of the moon, etc. Our mortgage advisor did multiple runs of the calculation for us for different scenarios, since we hadn't made up our minds entirely. Since you have not yet decided how much deposit to make, you can use multiple calculations to see the effect of different deposits you might make, up to a limit of your total savings. Putting up more deposit both increases the amount you can borrow for a given monthly payment (since mortgage rates are lower when the loan is a lower proportion of house value), and of course increases the house price you can afford. So unless you're getting a very high return on your savings, £1 of deposit gets you somewhat more than £1 of house, and the calculation will tell you how much more. Once you've chosen the house you want, the matter is even simpler: do you prefer to put your savings in the house and borrow less and make lower payments, or prefer to put your savings elsewhere and borrow more and make higher payments but perhaps have some additional income from the savings. Assuming you maintain a contingency fund, a lower mortgage is generally considered a good investment in the UK, but you need to check what's right for you and compare it to other investments you could make. The issue is complicated by the fact that residential property prices are rising quite quickly in most areas of the UK, and have been for a long time, meaning that highly-leveraged property investment appears to be a really good idea. This leads to the imprudent, but tempting, conclusion that you should buy the biggest house you can possibly afford and watch its value rises. 
I do not endorse this advice personally, but it's certainly true that in a sharply rising house market it's easier to get away with buying a bigger house than you need, than it is to get away with it in a flat or falling market. As Stephen says, an offset mortgage is a no-brainer good idea if the rate is the same. Unfortunately in the UK, the rate isn't the same (or anyway, it wasn't a couple of years ago). Offset mortgages are especially good for those who make a lot of savings from income and for any reason don't want to commit all of those savings to a traditional mortgage payment. Good reasons for not wanting to do that include uncertainty about your future income and a desire to have the flexibility to actually spend some of it if you fancy :-)\"",
"title": ""
}
] |
[
{
"docid": "f1ce77cace7085d6fd06cd494c162242",
"text": "Let me add a few thoughts that have not been mentioned so far in the other answers. Note that for the decision of buying vs. renting a home i.e. for personal use, not for renting out there's a rule of thumb that if the price for buying is more than 20 year's (cold) rents it is considered rather expensive. I don't know how localized this rule of thumb is, but I know it for Germany which is apparently the OP's country, too. There are obviously differences between buying a house/flat for yourself and in order to rent it out. As others have said, maintenance is a major factor for house owners - and here a lot depends on how much of that you do yourself (i.e. do you have the possibility to trade working hours for costs - which is closely related to financial risk exposure, e.g. increasing income by cutting costs as you do maintenance work yourself if you loose your day-time job?). This plays a crucial role for landlords I know (they're all small-scale landlords, and most of them do put in substantial work themselves): I know quite a number of people who rent out flats in the house where they actually live. Some of the houses were built with flats and the owner lives in one of the flats, another rather typical setup is that people built their house in the way that a smaller flat can easily be separated and let once the kids moved out (note also that the legal situation for the landlord is easier in that special case). I also know someone who owns a house several 100 km away from where they live and they say they intentionally ask a rent somewhat below the market price for that (nice) kind of flat so that they have lots of applicants at the same time and tenants don't move out as finding a new tenant is lots of work and costly because of the distance. My personal conclusion from those points is that as an investment (i.e. not for immediate or future personal use) I'd say that the exact circumstances are very important: if you are (stably) based in a region where the buying-to-rental-price ratio is favorable, you have the necessary time and are able to do maintenance work yourself and there is a chance to buy a suitable house closeby then why not. If this is not the case, some other form of investing in real estate may be better. On the other hand, investing in further real estate closeby where you live in your own house means increased lump risk - you miss diversification into regions where the value of real estate may develop very differently. There is one important psychological point that may play a role with the observed relation between being rich and being landlord. First of all, remember that the median wealth (without pensions) for Germany is about 51 k€, and someone owning a morgage-free 150 k€ flat and nothing else is somewhere in the 7th decile of wealth. To put it the other way round: the question whether to invest 150 k€ into becoming a landlord is of practical relevance only for rich (in terms of wealth) people. Also, asking this question is typically only relevant for people who already own the home they live in as buying for personal use will typically have a better return than buying in order to rent. But already people who buy for personal use are on average wealthier (or at least on the track to become more wealthy in case of fresh home owners) than people who rent. 
This is attributed to personal characteristics and the fact that the downpayment of the mortgage enforces saving behaviour (which is typically kept up once the house is paid, and is anyways found to be more pronounced than for non-house-owners). In contrast, many people who decide never to buy a home fall short of their initial savings/investment plans (e.g. putting the 150 k€ into an ETF for the next 21 years) and in the end spend considerably more money - and this group of people rarely invests into directly becoming a landlord. Assuming that you can read German, here's a relevant newspaper article and a related press release.",
"title": ""
},
{
"docid": "74b3f1e58bda2b062d3ad816837fd262",
"text": "Certainly, paying off the mortgage is better than doing nothing with the money. But it gets interesting when you consider keeping the mortgage and investing the money. If the mortgage rate is 5% and you expect >5% returns from stocks or some other investment, then it might make sense to seek those higher returns. If you expect the same 5% return from stocks, keeping the mortgage and investing the money can still be more tax-efficient. Assuming a marginal tax rate of 30%, the real cost of mortgage interest (in terms of post-tax money) is 3.5%*. If your investment results in long-term capital gains taxed at 15%, the real rate of growth of your post-tax money would be 4.25%. So in post-tax terms, your rate of gain is greater than your rate of loss. On the other hand, paying off the mortgage is safer than investing borrowed money, so doing so might be more appropriate for the risk-averse. * I'm oversimplifying a bit by assuming the deduction doesn't change your marginal tax rate.",
"title": ""
},
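The after-tax comparison in the passage above can be written out explicitly. This is a minimal sketch using the passage's own simplified assumptions: a 5% mortgage with fully deductible interest at a 30% marginal rate, a 5% investment return taxed as long-term capital gains at 15%, and no change in the marginal bracket.

```python
mortgage_rate = 0.05
marginal_tax = 0.30        # mortgage interest assumed deductible at this rate
investment_return = 0.05
ltcg_tax = 0.15            # long-term capital gains rate on the investment

real_mortgage_cost = mortgage_rate * (1 - marginal_tax)      # post-tax cost of the debt
real_investment_growth = investment_return * (1 - ltcg_tax)  # post-tax growth of the money

print(round(real_mortgage_cost, 4), round(real_investment_growth, 4))  # 0.035 0.0425
```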
{
"docid": "44aaaaed94c2fcc169b1218230d3f12f",
"text": "Keep in mind, this is a matter of preference, and the answers here are going to give you a look at the choices and the member's view on the positive/negative for each one. My opinion is to put 20% down (to avoid PMI) if the bank will lend you the full 80%. Then, buy the house, move in, and furnish it. Keep track of your spending for 2 years minimum. It's the anti-budget. Not a list of constraints you have for each category of spending, but a rear-view mirror of what you spend. This will help tell you if, in the new house, you are still saving well beyond that 401(k) and other retirement accounts, or dipping into that large reserve. At that point, start to think about where kids fit into your plans. People in million dollar homes tend to have child care that's 3-5x the cost the middle class has. (Disclosure - 10 years ago, our's cost $30K/year). Today, your rate will be about 4%, and federal marginal tax rate of 25%+, meaning a real cost of 3%. Just under the long term inflation rate, 3.2% over the last 100 years. I am 53, and for my childhood right through college, the daily passbook rate was 5%. Long term government debt is also at a record low level. This is the chart for 30 year bonds. I'd also suggest you get an understanding of the long term stock market return. Long term, 10%, but with periods as long as 10 years where the return can be negative. Once you are at that point, 2-3 years in the house, you can look at the pile of cash, and have 3 choices. We are in interesting times right now. For much of my life I'd have said the potential positive return wasn't worth the risk, but then the mortgage rate was well above 6-7%. Very different today.",
"title": ""
},
{
"docid": "2986506f97a9d44efebb9d02d2a580e9",
"text": "4) Beef up my emergency fund, make sure my 401(k) or IRA was fully funded, put the rest into investments. See many past answers. A house you are living in is not an investment. It is a purchase, just as rental is a purchase. Buying a house to rent out is starting a business. If you want to spend the ongoing time and effort and cash running a business, and if you can buy at the right time in the right place for the righr price, this can be a reasonable investment. If you aren't willing to suffer the pains of being a landlord, it's less attractive; you can hire someone to manage it for you but that cuts the income significantly. Starting a business: Remember that many, perhaps most, small businesses fail. If you really want to run a business it can be a good investment, again assuming you can buy at the right time/price/place and are willing and able to invest the time and effort and money to support the business. Nothing produces quick return with low risk.",
"title": ""
},
{
"docid": "8b7a6bdc360c99bedfb60ace81842d06",
"text": "A loan with modest interest is better than paying by cash if there are better alternatives for investment. For example, suppose you are buying a house. Consider two extremes: a) you pay the house entirely by cash, b) the entire buy is financed by the bank. Historically, real (subtracting inflation) house prices (at least in the U.S.) have not risen at all in the long run, and investing all of your own capital in this way may not be optimal. Notice that we are looking at a situation where one is buying a house and living in it in any case. Rent savings are equal in cases a) and b). If instead you were buying a house not for yourself, but as a separate investment for renting out, then you would receive rent. In the case a), the real return on your capital will be zero, whereas in case b), you can invest the cash in e.g. the stock market and get, on average, 7% (the stock market has yielded a 7% real return annually including dividends) annually minus the bank's interest rate. If the interest is lower than 7%, it may be profitable to take the loan. Of course, the final decision depends on your risk preferences.",
"title": ""
},
{
"docid": "699785d1cb3f24db24145681487e024e",
"text": "\"From what I've read, paying down your mortgage -- above and beyond what you'd normally pay -- is indeed an investment but a very poor form of investment. In other words, you could take that extra money you'd apply towards your mortgage and put it in something that has a much higher rate of return than a house. As an extreme example, consider: if I took $6k extra I would have paid toward my mortgage in a single year, and bought a nice performing stock, I could see returns of 2x or 3x. Now, that implies I know which stock to pick, etcetera.. I found a \"\"mortgage or investment\"\" calculator which could be of use as well: http://www.planningtips.com/cgi-bin/prepay_v_invest.pl (scroll to bottom to see the summary and whether or not prepay or invest wins for the numbers you plugged in)\"",
"title": ""
},
{
"docid": "c517ef7ba52c41d23492de2239036a19",
"text": "Investing in property hoping that it will gain value is usually foolish; real estate increases about 3% a year in the long run. Investing in property to rent is labor-intensive; you have to deal with tenants, and also have to take care of repairs. It's essentially getting a second job. I don't know what the word pension implies in Europe; in America, it's an employer-funded retirement plan separate from personally funded retirement. I'd invest in personally funded retirement well before buying real estate to rent, and diversify my money in that retirement plan widely if I was within 10-20 years of retirement.",
"title": ""
},
{
"docid": "f9e8f42cad8fe877bf8d85961940ffd8",
"text": "The big question is whether you will be flexible about when you'll get that house. The overall best investment (in terms of yielding a good risk/return ratio and requiring little effort) is a broad index fund (mutual or ETF), especially if you're contributing continuously and thereby take advantage of cost averaging. But the downside is that you have some volatility: during an economic downturn, your investment may be worth only half of what it's worth when the economy is booming. And of course it's very bad to have that happening just when you want to get your house. Then again, chances are that house prices will also go down in such times. If you want to avoid ever having to see the value of your investment go down, then you're pretty much stuck with things like your high-interest savings account (which sounds like a very good fit for your requirements.",
"title": ""
},
{
"docid": "516c2d122e4ea621f52e35fbf8647cce",
"text": "My figuring (and I'm not an expert here, but I think this is basic math) is: Let's say you had a windfall of $1000 extra dollars today that you could either: a. Use to pay down your mortgage b. Put into some kind of equity mutual fund Maybe you have 20 years left on your mortgage. So your return on investment with choice A is whatever your mortgage interest rate is, compounded monthly or daily. Interest rates are low now, but who knows what they'll be in the future. On the other hand, you should get more return out of an equity mutual fund investment, so I'd say B is your better choice, except: But that's also the other reason why I favour B over A. Let's say you lose your job a year from now. Your bank won't be too lenient with you paying your mortgage, even if you paid it off quicker than originally agreed. But if that money is in mutual funds, you have access to it, and it buys you time when you really need it. People might say that you can always get a second mortgage to get the equity out of it, but try getting a second mortgage when you've just lost your job.",
"title": ""
},
{
"docid": "5f1818e595b153a093011afb8863d5c1",
"text": "what other pieces of info should I consider If you don't have liquid case available for unexpected repairs, then you probably don't want to use this money for either option. The 7% return on the stocks is absolutely not guaranteed. There is a good amount of risk involved with any stock investment. Paying down the mortgage, by contrast, has a much lower risk. In the case of the mortgage, you know you'll get a 2.1% annual return until it adjusts, and then you can put some constraints on the return you'll get after it adjusts. In the case of stocks, it's reasonable to guess that it will return more than 2.1% annually if you hold it long enough. But there will be huge swings from month to month and from year to year. The sooner you need it, the more guaranteed you will want the return to be. If you have few or no stock (or bond)-like assets, then (nearly) all of your wealth is in your house, and that is independent of the remaining balance on your mortgage. If you are going to sell the house soon, then you will want to diversify your assets to protect you against a drop in home value. If you are going to stay in the house forever, then you will eventually need non-house assets to consume. Ultimately, neither option is inherently better; it really depends on what you need.",
"title": ""
},
{
"docid": "8dcbe5ddda15574ace112c0a790e58a5",
"text": "A lot of people on here will likely disagree with me and this opinion. In my opinion the answer lies in your own motives and intentions. If you'd like to be more cognizant of the market, I'd just dive in and buy a few companies you like. Many people will say you shouldn't pick your own stocks, you should buy an index fund, or this ETF or this much bonds, etc. You already have retirement savings, capital allocation is important there. You're talking about an account total around 10% of your annual salary, and assuming you have sufficient liquid emergency funds; there's a lot of non-monetary benefit to being more aware of the economy and the stock market. But if you find the house you're going to buy, you may have to liquidate this account at a time that's not ideal, possibly at a loss. If all you're after is a greater return on your savings than the paltry 0.05% (or whatever) the big deposit banks are paying, then a high yield savings account is the way I'd go, or a CD ladder. Yes, the market generally goes up but it doesn't ALWAYS go up. Get your money somewhere that it's inured and you can be certain how much you'll have tomorrow. Assuming a gain, the gain you'll see will PALE in comparison to the deposits you'll make. Deposits grow accounts. Consider these scenarios if you allocate $1,000 per month to this account. 1) Assuming an investment return of 5% you're talking about $330 return in the first year (not counting commissions or possible losses). 2) Assuming a high yield savings account at 1.25% you're talking about $80 in the first year. Also remember, both of these amounts would be taxable. I'll admit in the event of 5% return you'll have about four times the gain but you're talking about a difference of ~$250 on $12,000. Over three to five years the most significant contributor to the account, by far, will be your deposits. Anyway, as I'm sure you know this is not investment advice and you may lose money etc.",
"title": ""
},
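The two scenarios in the passage above ($1,000 per month at 5% vs at 1.25%) can be checked with a short loop. This is a rough sketch that assumes deposits at the start of each month and monthly compounding, and it ignores taxes and any losses.

```python
def first_year_interest(annual_rate, monthly_deposit=1_000):
    """Approximate first-year earnings on regular monthly deposits."""
    balance = 0.0
    for _ in range(12):
        balance = (balance + monthly_deposit) * (1 + annual_rate / 12)
    return balance - 12 * monthly_deposit

print(round(first_year_interest(0.05), 2))    # ~330, the passage's "about $330"
print(round(first_year_interest(0.0125), 2))  # ~82, close to the passage's "about $80"
```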
{
"docid": "3da4efe6540dfd85d329d83f22974972",
"text": "\"With no numbers offered, it's not like we can tell you if it's a wise purchase. -- JoeTaxpayer We can, however, talk about the qualitative tradeoffs of renting vs owning. The major drawback which you won't hear enough about is risk. You will be putting a very large portion of your net worth in what is effectively a single asset. This is somewhat risky. What happens if the regional economy takes a hit, and you get laid off? Chances are you won't be the only one, and the value of your house will take a hit at the same time, a double-whammy. If you need to sell and move away for a job in another town, you will be taking a financial hit - that is, if you can sell and still cover your mortgage. You will definitely not be able to walk away and find a new cheap apartment to scrimp on expenses for a little while. Buying a house is putting down roots. On the other hand, you will be free from the opposite risk: rising rents. Once you've purchased the house, and as long as you're living in it, you don't ever need to worry about a local economic boom and a bunch of people moving into town and making more money than you, pushing up rents. (The San Francisco Bay Area is an example of where that has happened. Gentrification has its malcontents.) Most of the rest is a numbers game. Don't get fooled into thinking that you're \"\"throwing away\"\" money on renting - if you really want to, you can save money yourself, and invest a sum approximately equal to your down payment in the stock market, in some diversified mutual funds, and you will earn returns on that at a rate similar to what you would get by building equity in your home. (You won't earn outsized housing-bubble-of-2007 returns, but you shouldn't expect those in the housing market of today anyway.) Also, if you own, you have broad discretion over what you can do with the property. But you have to take care of the maintenance and stuff too.\"",
"title": ""
},
{
"docid": "2c4bc25e5ecf9f7dd4e2a49e2fe716ba",
"text": "\"To add to what other have stated, I recently just decided to purchase a home over renting some more, and I'll throw in some of my thoughts about my decision to buy. I closed a couple of weeks ago. Note that I live in Texas, and that I'm not knowledgeable in real estate other than what I learned from my experiences in the area when I am located. It depends on the market and location. You have to compare what renting will get you for the money vs what buying will get you. For me, buying seemed like a better deal overall when just comparing monthly payments. This is including insurance and taxes. You will need to stay at a house that you buy for at least 5-7 years. You first couple years of payments will go almost entirely towards interest. It takes a while to build up equity. If you can pay more towards a mortgage, do it. You need to have money in the bank already to close. The minimum down payment (at least in my area) is 3.5% for an FHA loan. If you put 20% down, you don't need to pay mortgage insurance, which is essentially throwing money away. You will also have add in closing costs. I ended up purchasing a new construction. My monthly payment went up from $1200 to $1600 (after taxes, insurance, etc.), but the house is bigger, newer, more energy efficient, much closer to my work, in a more expensive area, and in a market that is expected to go up in value. I had all of my closing costs (except for the deposit) taken care of by the lender and builder, so all of my closing costs I paid out of pocket went to the deposit (equity, or the \"\"bank\"\"). If I decide to move and need to sell, then I will get a lot (losing some to selling costs and interest) of the money I have put in to the house back out of it when I do sell, and I have the option to put that money towards another house. To sum it all up, I'm not paying a difference in monthly costs because I bought a house. I had my closing costs taking care of and just had to pay the deposit, which goes to equity. I will have to do maintenance myself, but I don't mind fixing what I can fix, and I have a builder's warranties on most things in the house. To really get a good idea of whether you should rent or buy, you need to talk to a Realtor and compare actual costs. It will be more expensive in the short term, but should save you money in the long term.\"",
"title": ""
},
{
"docid": "abeead7391f1ad7e527550a2bca32fd5",
"text": "\"For some people, it should be a top priority. For others, there are higher priorities. What it should be for you depends on a number of things, including your overall financial situation (both your current finances and how stable you expect them to be over time), your level of financial \"\"education\"\", the costs of your mortgage, the alternative investments available to you, your investing goals, and your tolerance for risk. Your #1 priority should be to ensure that your basic needs (including making the required monthly payment on your mortgage) are met, both now and in the near future, which includes paying off high-interest (i.e. credit card) debt and building up an emergency fund in a savings or money-market account or some other low-risk and liquid account. If you haven't done those things, do not pass Go, do not collect $200, and do not consider making advance payments on your mortgage. Mason Wheeler's statements that the bank can't take your house if you've paid it off are correct, but it's going to be a long time till you get there and they can take it if you're partway to paying it off early and then something bad happens to you and you start missing payments. (If you're not underwater, you should be able to get some of your money back by selling - possibly at a loss - before it gets to the point of foreclosure, but you'll still have to move, which can be costly and unappealing.) So make sure you've got what you need to handle your basic needs even if you hit a rough patch, and make sure you're not financing the paying off of your house by taking a loan from Visa at 27% annually. Once you've gotten through all of those more-important things, you finally get to decide what else to invest your extra money in. Different investments will provide different rewards, both financial and emotional (and Mason Wheeler has clearly demonstrated that he gets a strong emotional payoff from not having a mortgage, which may or may not be how you feel about it). On the financial side of any potential investment, you'll want to consider things like the expected rate of return, the risk it carries (both on its own and whether it balances out or unbalances the overall risk profile of all your investments in total), its expected costs (including its - and your - tax rate and any preferred tax treatment), and any other potential factors (such as an employer match on 401(k) contributions, which are basically free money to you). Then you weigh the pros and cons (financial and emotional) of each option against your imperfect forecast of what the future holds, take your best guess, and then keep adjusting as you go through life and things change. But I want to come back to one of the factors I mentioned in the first paragraph. Which options you should even be considering is in part influenced by the degree to which you understand your finances and the wide variety of options available to you as well as all the subtleties of how different things can make them more or less advantageous than one another. The fact that you're posting this question here indicates that you're still early in the process of learning those things, and although it's great that you're educating yourself on them (and keep doing it!), it means that you're probably not ready to worry about some of the things other posters have talked about, such as Cost of Capital and ROI. 
So keep reading blog posts and articles online (there's no shortage of them), and keep developing your understanding of the options available to you and their pros and cons, and wait to tackle the full suite of investment options till you fully understand them. However, there's still the question of what to do between now and then. Paying the mortgage down isn't an unreasonable thing for you to do for now, since it's a guaranteed rate of return that also provides some degree of emotional payoff. But I'd say the higher priority should be getting money into a tax-advantaged retirement account (a 401(k)/403(b)/IRA), because the tax-advantaged growth of those accounts makes their long-term return far greater than whatever you're paying on your mortgage, and they provide more benefit (tax-advantaged growth) the earlier you invest in them, so doing that now instead of paying off the house quicker is probably going to be better for you financially, even if it doesn't provide the emotional payoff. If your employer will match your contributions into that account, then it's a no-brainer, but it's probably still a better idea than the mortgage unless the emotional payoff is very very important to you or unless you're nearing retirement age (so the tax-free growth period is small). If you're not sure what to invest in, just choose something that's broad-market and low-cost (total-market index funds are a great choice), and you can diversify into other things as you gain more savvy as an investor; what matters more is that you start investing in something now, not exactly what it is. Disclaimer: I'm not a personal advisor, and this does not constitute investing advice. Understand your choices and make your own decisions.\"",
"title": ""
},
{
"docid": "2139d24685a800e9d6c9b24094764ec4",
"text": "I think there are a few facets to this, namely: Overall, I wouldn't concentrate on paying off the house if I didn't have any other money parked and invested, but I'd still try to get rid of the mortgage ASAP as it'll give you more money that you can invest, too. At the end of the day, if you save out paying $20k in interest, that's almost $20k you can invest. Yes, I realise there's a time component to this as well and you might well get a better return overall if you invested the $20k now that in 5 years' time. But I'd still rather pay off the house.",
"title": ""
}
] |
fiqa
|
a4b2487d04e44c160930f143f8592891
|
How useful is the PEG Ratio for large cap stocks?
|
[
{
"docid": "83ff91d25d43c5069739a553a5a028ad",
"text": "It is not so useful because you are applying it to large capital. Think about Theory of Investment Value. It says that you must find undervalued stocks with whatever ratios and metrics. Now think about the reality of a company. For example, if you are waiting KO (The Coca-Cola Company) to be undervalued for buying it, it might be a bad idea because KO is already an international well known company and KO sells its product almost everywhere...so there are not too many opportunities for growth. Even if KO ratios and metrics says it's a good time to buy because it's undervalued, people might not invest on it because KO doesn't have the same potential to grow as 10 years ago. The best chance to grow is demographics. You are better off either buying ETFs monthly for many years (10 minimum) OR find small-cap and mid-cap companies that have the potential to grow plus their ratios indicate they might be undervalued. If you want your investment to work remember this: stock price growth is nothing more than You might ask yourself. What is your investment profile? Agressive? Speculative? Income? Dividends? Capital preservation? If you want something not too risky: ETFs. And not waste too much time. If you want to get more returns, you have to take more risks: find small-cap and mid-companies that are worth. I hope I helped you!",
"title": ""
}
] |
[
{
"docid": "4331dfcd3dcdaffd04df712bb8c58514",
"text": "Well Company is a small assets company for example it has 450,000,000 shares outstanding and is currently traded at .002. Almost never has a bid price. Compare it to PI a relative company with 350 million marker cap brokers will buy your shares. This is why blue chip stock is so much better than small company because it is much more safer. You can in theory make millions with start up / small companies. You would you rather make stable medium risk investment than extremely high risk with high reward investment I only invest in medium risk mutual funds and with recent rallies I made 182,973 already in half year period.",
"title": ""
},
{
"docid": "be1b32a07b443f30339d679ae66b7750",
"text": "There are the EDHEC-risk indices based on similar hedge fund types but even then an IR would give you performance relative to the competition, which is not useful for most hf's as investors don't say I want to buy a global macro fund, vs a stat arb fund, investors say I want to pay a guy to give me more money! Most investors don't care how the OTHER funds did or where the market went, they want that NAV to go always up , which is why a modified sharpe is probably better.",
"title": ""
},
{
"docid": "e7b44d6fb01103d972318fdd1aa04c52",
"text": "\"You'll generally get a number close to market cap of a mature company if you divide profits (or more accurately its free cash flow to equity) by the cost of equity which is usually something like ~7%. The value is meant to represent the amount of cash you'd need to generate investment income off it matching the company you're looking at. Imagine it as asking \"\"How much money do I need to put into the bank so that my interest income would match the profits of the company I'm looking at\"\". Except replace the bank with the market and other forms of investments that generate higher returns of course and that value would be lower.\"",
"title": ""
},
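The passage above is describing a perpetuity-style shortcut: capitalise the company's free cash flow to equity at the cost of equity. Here is a minimal sketch with made-up numbers; the ~7% cost of equity is the passage's figure and the $1bn cash flow is purely illustrative.

```python
fcfe = 1_000_000_000   # hypothetical annual free cash flow to equity, in dollars
cost_of_equity = 0.07  # the passage's rough discount rate

implied_equity_value = fcfe / cost_of_equity  # value of a level perpetuity of that cash flow
print(f"{implied_equity_value:,.0f}")         # 14,285,714,286 -> roughly a $14bn equity value
```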
{
"docid": "81ec14fc701de02e845c914aa6aa8ca4",
"text": "No, this is quite wrong. Almost all hedge funds (and all hedge fund investors) use Sharpe as a *primary* measure of performance. The fact that they don't consider themselves risk-free has no bearing on the issue (that's a bizarre line of reasoning - you're saying Sharpe is only relevant for assets that consider themselves risk-free?). And as AlphaPortfolio rightly points out, most funds have no explicit benchmark and they are usually paid for performance over zero. I've never seen a hedge fund use a benchmark relative information ratio - for starters, what benchmark would you measure a CB arb fund against? Or market neutral quant? Or global macro? Same for CTAs...",
"title": ""
},
{
"docid": "a8f4d0b823ec45f1f14ee70df1183374",
"text": "It sounds to me like you may not be defining fundamental investing very well, which is why it may seem like it doesn't matter. Fundamental investing means valuing a stock based on your estimate of its future profitability (and thus cash flows and dividends). One way to do this is to look at the multiples you have described. But multiples are inherently backward-looking so for firms with good growth prospects, they can be very poor estimates of future profitability. When you see a firm with ratios way out of whack with other firms, you can conclude that the market thinks that firm has a lot of future growth possibilities. That's all. It could be that the market is overestimating that growth, but you would need more information in order to conclude that. We call Warren Buffet a fundamental investor because he tends to think the market has made a mistake and overvalued many firms with crazy ratios. That may be in many cases, but it doesn't necessarily mean those investors are not using fundamental analysis to come up with their valuations. Fundamental investing is still very much relevant and is probably the primary determinant of stock prices. It's just that fundamental investing encompasses estimating things like future growth and innovation, which is a lot more than just looking at the ratios you have described.",
"title": ""
},
{
"docid": "0cc8c705118c1a33d31241664c06f9e3",
"text": "I would think there would be heavy overlap between companies that do well and market cap. You're not going to get to largest market cap without being well managed, or at least in the top percentile. After all, in a normal distribution, the badly managed firms go out of business or never get large.",
"title": ""
},
{
"docid": "4d14c004981443285c0e14072fc0a322",
"text": "The biggest benefit to having a larger portfolio is relatively reduced transaction costs. If you buy a $830 share of Google at a broker with a $10 commission, the commission is 1.2% of your buy price. If you then sell it for $860, that's another 1.1% gone to commission. Another way to look at it is, of your $30 ($860 - $830) gain you've given up $20 to transaction costs, or 66.67% of the proceeds of your trade went to transaction costs. Now assume you traded 10 shares of Google. Your buy was $8,300 and you sold for $8,600. Your gain is $300 and you spent the same $20 to transact the buy and sell. Now you've only given up 6% of your proceeds ($20 divided by your $300 gain). You could also scale this up to 100 shares or even 1,000 shares. Generally, dividend reinvestment are done with no transaction cost. So you periodically get to bolster your position without losing more to transaction costs. For retail investors transaction costs can be meaningful. When you're wielding a $5,000,000 pot of money you can make your trades on a larger scale giving up relatively less to transaction costs.",
"title": ""
},
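The commission arithmetic in the passage above scales in a simple way, so here is a small sketch of it. The $830 buy, $860 sell and $10-per-trade commission are the passage's hypothetical numbers.

```python
def commissions_share_of_gain(shares, buy=830.0, sell=860.0, commission=10.0):
    """Fraction of the trading gain consumed by the round-trip commissions."""
    gain = (sell - buy) * shares
    round_trip_cost = 2 * commission
    return round_trip_cost / gain

print(round(commissions_share_of_gain(1), 3))    # 0.667 -> two thirds of a $30 gain
print(round(commissions_share_of_gain(10), 3))   # 0.067 -> about 6.7% of a $300 gain
```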
{
"docid": "bf6022bc93687e36f52a30b212aea8d4",
"text": "I think it's safe to say that Apple cannot grow in value in the next 20 years as fast as it did in the prior 20. It rose 100 fold to a current 730B valuation. 73 trillion dollars is nearly half the value of all wealth in the world. Unfortunately, for every Apple, there are dozens of small companies that don't survive. Long term it appears the smaller cap stocks should beat large ones over the very long term if only for the fact that large companies can't maintain that level of growth indefinitely. A non-tech example - Coke has a 174B market cap with 46B in annual sales. A small beverage company can have $10M in sales, and grow those sales 20-25%/year for 2 decades before hitting even $1B in sales. When you have zero percent of the pie, it's possible to grow your business at a fast pace those first years.",
"title": ""
},
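The compounding claim in the passage above (that $10M of sales growing 20-25% a year takes about two decades to approach $1B) is easy to verify; a quick sketch:

```python
sales = 10_000_000
for rate in (0.20, 0.25):
    print(rate, round(sales * (1 + rate) ** 20))
# ~383 million at 20%, ~867 million at 25% -> both still under $1B after 20 years
```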
{
"docid": "ef598db00822ea62dc1ec99fb6904b32",
"text": "Thanks. Just to clarify I am looking for a more value-neutral answer in terms of things like Sharpe ratios. I think it's an oversimplification to say that on average you lose money because of put options - even if they expire uselessly 90% of the time, they still have some expected payoff that kicks in 10% of the time, and if the price is less than the expected payoff you will earn money in the long term by investing in put options (I am sure you know this as a PhD student I just wanted to get it out there.)I guess more formally my question would be are there studies on whether options prices correspond well to the diversification benefits they offer from an MPT point of view.",
"title": ""
},
{
"docid": "ce4221079abce3405a8b34b151d4a4d5",
"text": "The Sharpe ratio is, perhaps, the method you are looking for. That said, not really sure beta is a meaningful metric, as there are plenty of safe bets to be made on volatile stocks (and, conversely, unsafe bets to be made on non-volatile ones).",
"title": ""
},
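Since the passage above points to the Sharpe ratio, here is a minimal sketch of the standard calculation: mean excess return over the risk-free rate, divided by the standard deviation of those excess returns, annualised. The monthly return series below is entirely made up.

```python
import statistics

def sharpe_ratio(returns, risk_free_annual=0.0, periods_per_year=12):
    """Annualised Sharpe ratio from a list of periodic returns."""
    excess = [r - risk_free_annual / periods_per_year for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess) * periods_per_year ** 0.5

monthly_returns = [0.02, -0.01, 0.015, 0.03, -0.005, 0.01]
print(round(sharpe_ratio(monthly_returns, risk_free_annual=0.01), 2))  # ~2.1 for this sample
```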
{
"docid": "c26abce4a4b994467b349f12d67579d0",
"text": "\"Below is just a little information on this topic from my small unique book \"\"The small stock trader\"\": The most significant non-company-specific factor affecting stock price is the market sentiment, while the most significant company-specific factor is the earning power of the company. Perhaps it would be safe to say that technical analysis is more related to psychology/emotions, while fundamental analysis is more related to reason – that is why it is said that fundamental analysis tells you what to trade and technical analysis tells you when to trade. Thus, many stock traders use technical analysis as a timing tool for their entry and exit points. Technical analysis is more suitable for short-term trading and works best with large caps, for stock prices of large caps are more correlated with the general market, while small caps are more affected by company-specific news and speculation…: Perhaps small stock traders should not waste a lot of time on fundamental analysis; avoid overanalyzing the financial position, market position, and management of the focus companies. It is difficult to make wise trading decisions based only on fundamental analysis (company-specific news accounts for only about 25 percent of stock price fluctuations). There are only a few important figures and ratios to look at, such as: perhaps also: Furthermore, single ratios and figures do not tell much, so it is wise to use a few ratios and figures in combination. You should look at their trends and also compare them with the company’s main competitors and the industry average. Preferably, you want to see trend improvements in these above-mentioned figures and ratios, or at least some stability when the times are tough. Despite all the exotic names found in technical analysis, simply put, it is the study of supply and demand for the stock, in order to predict and follow the trend. Many stock traders claim stock price just represents the current supply and demand for that stock and moves to the greater side of the forces of supply and demand. If you focus on a few simple small caps, perhaps you should just use the basic principles of technical analysis, such as: I have no doubt that there are different ways to make money in the stock market. Some may succeed purely on the basis of technical analysis, some purely due to fundamental analysis, and others from a combination of these two like most of the great stock traders have done (Jesse Livermore, Bernard Baruch, Gerald Loeb, Nicolas Darvas, William O’Neil, and Steven Cohen). It is just a matter of finding out what best fits your personality. I hope the above little information from my small unique book was a little helpful! Mika (author of \"\"The small stock trader\"\")\"",
"title": ""
},
{
"docid": "af7535b950b00daa65f3e587fcb3e827",
"text": "Most of the “recommendations” are just total market allocations. Within domestic stocks, the performance rotates. Sometimes large cap outperform, sometimes small cap outperform. You can see the chart here (examine year by year): https://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1428692400000&chddm=99646&chls=IntervalBasedLine&cmpto=NYSEARCA:VO;NYSEARCA:VB&cmptdms=0;0&q=NYSEARCA:VV&ntsp=0&ei=_sIqVbHYB4HDrgGA-oGoDA Conventional wisdom is to buy the entire market. If large cap currently make up 80% of the market, you would allocate 80% of domestic stocks to large cap. Same case with International Stocks (Developed). If Japan and UK make up the largest market internationally, then so be it. Similar case with domestic bonds, it is usually total bond market allocation in the beginning. Then there is the question of when you want to withdraw the money. If you are withdrawing in a couple years, you do not want to expose too much to currency risks, thus you would allocate less to international markets. If you are investing for retirement, you will get the total world market. Then there is the question of risk tolerance. Bonds are somewhat negatively correlated with Stocks. When stock dips by 5% in a month, bonds might go up by 2%. Under normal circumstances they both go upward. Bond/Stock allocation ratio is by age I’m sure you knew that already. Then there is the case of Modern portfolio theory. There will be slight adjustments to the ETF weights if it is found that adjusting them would give a smaller portfolio variance, while sacrificing small gains. You can try it yourself using Excel solver. There is a strategy called Sector Rotation. Google it and you will find examples of overweighting the winners periodically. It is difficult to time the rotation, but Healthcare has somehow consistently outperformed. Nonetheless, those “recommendations” you mentioned are likely to be market allocations again. The “Robo-advisors” list out every asset allocation in detail to make you feel overwhelmed and resort to using their service. In extreme cases, they can even break down the holdings to 2/3/4 digit Standard Industrial Classification codes, or break down the bond duration etc. Some “Robo-advisors” would suggest you as many ETF as possible to increase trade commissions (if it isn’t commission free). For example, suggesting you to buy VB, VO, VV instead a VTI.",
"title": ""
},
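The "smaller portfolio variance" adjustment mentioned above can be sketched in a few lines. This is a minimal illustration under assumed inputs: the monthly returns are randomly generated stand-ins for three hypothetical funds, and the closed-form minimum-variance weights take the place of the Excel Solver step.

```python
# Minimum-variance weights for three hypothetical funds (fake return data).
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.04, size=(120, 3))   # 120 months x 3 funds, made up
cov = np.cov(returns, rowvar=False)                 # sample covariance matrix

# Closed form: w is proportional to inverse(cov) @ 1, normalized to sum to 1.
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()
print(np.round(w, 3))   # weights that minimize variance, ignoring expected return
```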
{
"docid": "32a43dc6ba76140884e09956a9c7bee8",
"text": "There is some convergence, but the chart seems to indicate that 5 star funds end up on the upper end of average (3 stars) whereas 1 star funds end up on the lower end of average (1.9 stars) over the long term. I would have thought that the stars would be completely useless as forward looking indicators, but they seem to have been slightly useful?",
"title": ""
},
{
"docid": "99a35d8a21693b605106176989414fed",
"text": "This is Rob Bennett, the fellow who developed the Valuation-Informed Indexing strategy and the fellow who is discussed in the comment above. The facts stated in that comment are accurate -- I went to a zero stock allocation in the Summer of 1996 because of my belief in Robert Shiller's research showing that valuations affect long-term returns. The conclusion stated, that I have said that I do not myself follow the strategy, is of course silly. If I believe in it, why wouldn't I follow it? It's true that this is a long-term strategy. That's by design. I see that as a benefit, not a bad thing. It's certainly true that VII presumes that the Efficient Market Theory is invalid. If I thought that the market were efficient, I would endorse Buy-and-Hold. All of the conventional investing advice of recent decades follows logically from a belief in the Efficient Market Theory. The only problem I have with that advice is that Shiller's research discredits the Efficient Market Theory. There is no one stock allocation that everyone following a VII strategy should adopt any more than there is any one stock allocation that everyone following a Buy-and-Hold strategy should adopt. My personal circumstances have called for a zero stock allocation. But I generally recommend that the typical middle-class investor go with a 20 percent stock allocation even at times when stock prices are insanely high. You have to make adjustments for your personal financial circumstances. It is certainly fair to say that it is strange that stock prices have remained insanely high for so long. What people are missing is that we have never before had claims that Buy-and-Hold strategies are supported by academic research. Those claims caused the biggest bull market in history and it will take some time for the widespread belief in such claims to diminish. We are in the process of seeing that happen today. The good news is that, once there is a consensus that Buy-and-Hold can never work, we will likely have the greatest period of economic growth in U.S. history. The power of academic research has been used to support Buy-and-Hold for decades now because of the widespread belief that the market is efficient. Turn that around and investors will possess a stronger belief in the need to practice long-term market timing than they have ever possessed before. In that sort of environment, both bull markets and bear markets become logical impossibilities. Emotional extremes in one direction beget emotional extremes in the other direction. The stock market has been more emotional in the past 16 years than it has ever been in any earlier time (this is evidenced by the wild P/E10 numbers that have applied for that entire time-period). Now that we are seeing the losses that follow from investing in highly emotional ways, we may see rational strategies becoming exceptionally popular for an exceptionally long period of time. I certainly hope so! The comment above that this will not work for individual stocks is correct. This works only for those investing in indexes. The academic research shows that there has never yet in 140 years of data been a time when Valuation-Informed Indexing has not provided far higher long-term returns at greatly diminished risk. But VII is not a strategy designed for stock pickers. There is no reason to believe that it would work for stock pickers. 
Thanks much for giving this new investing strategy some thought and consideration and for inviting comments that help investors to understand both points of view about it. Rob",
"title": ""
},
{
"docid": "c28eb69add00010b45511f54bf8ebe0e",
"text": "\"Maria, there are a few questions I think you must consider when considering this problem. Do fundamental or technical strategies provide meaningful information? Are the signals they produce actionable? In my experience, and many quantitative traders will probably say similar things, technical analysis is unlikely to provide anything meaningful. Of course you may find phenomena when looking back on data and a particular indicator, but this is often after the fact. One cannot action-ably trade these observations. On the other hand, it does seem that fundamentals can play a crucial role in the overall (typically long run) dynamics of stock movement. Here are two examples, Technical: suppose we follow stock X and buy every time the price crosses above the 30 day moving average. There is one obvious issue with this strategy - why does this signal have significance? If the method is designed arbitrarily then the answer is that it does not have significance. Moreover, much of the research supports that stocks move close to a geometric brownian motion with jumps. This supports the implication that the system is meaningless - if the probability of up or down is always close to 50/50 then why would an average based on the price be predictive? Fundamental: Suppose we buy stocks with the best P/E ratios (defined by some cutoff). This makes sense from a logical perspective and may have some long run merit. However, there is always a chance that an internal blowup or some macro event creates a large loss. A blended approach: for sake of balance perhaps we consider fundamentals as a good long-term indication of growth (what quants might call drift). We then restrict ourselves to equities in a particular index - say the S&P500. We compare the growth of these stocks vs. their P/E ratios and possibly do some regression. A natural strategy would be to sell those which have exceeded the expected return given the P/E ratio and buy those which have underperformed. Since all equities we are considering are in the same index, they are most likely somewhat correlated (especially when traded in baskets). If we sell 10 equities that are deemed \"\"too high\"\" and buy 10 which are \"\"too low\"\" we will be taking a neutral position and betting on convergence of the spread to the market average growth. We have this constructed a hedged position using a fundamental metric (and some helpful statistics). This method can be categorized as a type of index arbitrage and is done (roughly) in a similar fashion. If you dig through some data (yahoo finance is great) over the past 5 years on just the S&P500 I'm sure you'll find plenty of signals (and perhaps profitable if you calibrate with specific numbers). Sorry for the long and rambling style but I wanted to hit a few key points and show a clever methods of using fundamentals.\"",
"title": ""
}
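The blended approach described above (regress growth against P/E within an index, then trade the residuals) can be sketched as follows. Every ticker and number below is invented for illustration; this is a mechanical sketch of the idea, not a tested strategy.

```python
# Sell the names that outgrew what their P/E "explains", buy the laggards.
import numpy as np

tickers = ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"]       # hypothetical index members
pe      = np.array([12.0, 18.0, 25.0, 9.0, 30.0, 15.0])    # price/earnings
growth  = np.array([0.04, 0.12, 0.10, 0.02, 0.22, 0.05])   # trailing return

b, a = np.polyfit(pe, growth, 1)          # ordinary least squares: growth ~ a + b*pe
residual = growth - (a + b * pe)          # how far each name sits from the fit

order = np.argsort(residual)
buys  = [tickers[i] for i in order[:2]]   # underperformed their P/E -> buy
sells = [tickers[i] for i in order[-2:]]  # overshot their P/E -> sell
print("buy:", buys, "sell:", sells)
```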
] |
fiqa
|
066a3de7c7bb4873a5da9b895f2501ca
|
Is a credit card deposit a normal part of the vehicle purchase process
|
[
{
"docid": "2c91d469adaf30cb4392e92342f5ad50",
"text": "\"Unfortunately, it's not unusual enough. If you're looking for a popular car and the dealer wants to make sure they aren't holding onto inventory without a guarantee for sale, then it's a not completely unreasonable request. You'll want to make sure that the deposit is on credit card, not cash or check, so you can dispute if an issue arises. Really though, most dealers don't do this, requiring a deposit, pre sale is usually one of those hardball negotiating tactics where the dealer wrangles you into a deal, even if they don't have a good deal to make. Dealers may tell you that you can't get your deposit back, even if they don't have the car you agreed on or the deal they agreed to. You do have a right for your deposit back if you haven't completed the transaction, but it can be difficult if they don't want to give you your money back. The dealer doesn't ever \"\"not know if they have that specific vehicle in stock\"\". The dealer keeps comprehensive searchable records for every vehicle, it's good for sales and it's required for tax records. Even when they didn't use computers for all this, the entire inventory is a log book or phone call away. In my opinion, I would never exchange anything with the dealer without a car actually attached to the deal. I'd put down a deposit on a car transfer if I were handed a VIN and verified that it had all the exact options that we agreed upon, and even then I'd be very cautious about the condition.\"",
"title": ""
}
] |
[
{
"docid": "5732591aae33f59231af5cb46932ab57",
"text": "A credit card is not a bank account. It is, essentially, a contract to extend a line of credit on an as needed basis through a process accepted by the provider(purchase through approved vender, cash advance, etc). There is no mechanism for the bank to accept or hold a deposit. While most card issuers will simple retain the money for a period of up 30-60 days to apply toward transactions, I have had a card that actually charged a fee for having a negative balance in excess of $10 for more than 30 days(the fee was $10/month). So no you can not DEPOSIT money on any credit card. You need an account that accepts deposits to make a deposit.",
"title": ""
},
{
"docid": "9e814218015e61c473d66135a4cfd495",
"text": "I agree with the deposit part. But if you are buying a new car, the loan term should meet the warranty term. Assuming you know you won't exceed the mileage limits, it's a car with only maintainence costs and the repayment cost at that point.",
"title": ""
},
{
"docid": "013e7bbdcf2f60f8c14ed6aeb7d90a95",
"text": "\"This is most likely protecting Square's relationship with Visa/Mastercard/AMEX/etc. Credit card companies typically charge their customers a much higher interest rate with no grace period on cash advances (withdrawals made from an ATM using a credit card). If you use Square to generate something that looks like a \"\"merchandise transaction\"\" but instead just hand over a wad of banknotes, you're forcing the credit card company to apply their cheaper \"\"purchases\"\" interest rate on the transaction, plus award any applicable cashback offers†, etc. Square would absolutely profit off of this, but since it would result in less revenue for the partner credit card companies, that would quickly sour the relationship and could even result in them terminating their agreements with Square altogether. † This is the kind of activity they are trying to prevent: 1. Bill yourself $5,000 for \"\"merchandise\"\", but instead give yourself cash. 2. Earn 1.5% cashback ($75). 3. Use $4,925 of the cash and a $75 statement credit to pay your credit card statement. 4. Pocket the difference. 5. Repeat. Note, the fees involved probably negate any potential gain shown in this example, but I'm sure with enough creative thinking someone would figure out a way to game the system if it wasn't expressly forbidden in the terms of service\"",
"title": ""
},
{
"docid": "f3e72bdefbd10e71f4e78095e8889f4b",
"text": "It will depend on credit they are offering you during the period being covered. A gas station locks up what they expect is the maximum transaction for most people. When the prices of gas spikes some people have the pump turn off before the tank is filled, therefore they need to use a 2nd card to complete the purchase. Before you arrive at a hotel they lockup the cost of one night in the hotel, that way they still sell the room for one night if you never show. While you are there they lockup the cost of what you could owe them. This would include the cost of the room, and average room service or bar service. For a car rental, it would be based on the risk they perceive. They don't want to try and collect against a card you gave them when you reserved the car, or when you picked up the car, only to find that you have gone over the limit. Some online systems will let you see what is pending against your card. Others could provide that information to you over the phone.",
"title": ""
},
{
"docid": "ca6825a395b2bee9c84e0f46ececc662",
"text": "\"At one point in my life I sold cars and from what I saw, three things stick out. Unless the other dealership was in the same network, eg ABC Ford of City A, and ABC Ford of City B, they never had possession of that truck. So, no REAL application for a loan could be sent in to a bank, just a letter of intent, if one was sent at all. With a letter of intent, a soft pull is done, most likely by the dealership, where they then attached that score to the LOI that the bank has an automated program send back an automatic decline, an officer review reply, or a tentative approval (eg tier 0,1,2...8). The tentative approval is just that, Tentative. Sometime after a lender has a loan officer look at the full application, something prompts them to change their offer. They have internal guidelines, but lets say an app is right at the line for 2-3 of the things they look at, they chose to lower the credit tier or decline the app. The dealership then goes back and looks at what other offers they had. Let's say they had a Chase offer at 3.25% and a CapOne for 5.25% they would say you're approved at 3.5%, they make their money on the .25%. But after Chase looks into the app and sees that, let's say you have been on the job for actually 11 months and not 1 year, and you said you made $50,000, but your 1040 shows $48,200, and you have moved 6 times in the last 5 years. They comeback and say no he is not a tier 2 but a tier 3 @ 5.5%. They switch to CapOne and say your rate has in fact gone up to 5.5%. Ultimately you never had a loan to start with - only a letter of intent. The other thing could be that the dealership finance manager looked at your credit score and guessed they would offer 3.5%, when they sent in the LOI it came back higher than he thought. Or he was BSing you, so if you price shopped while they looked for a truck you wouldn't get far. They didn't find that Truck, or it was not what they thought it would be. If a dealership sees a truck in inventory at another dealer they call and ask if it's available, if they have it, and it's not being used as a demo for a sales manager, they agree to send them something else for the trade, a car, or truck or whatever. A transfer driver of some sort hops in that trade, drives the 30 minutes - 6 hours away and comes back so you can sign the Real Application, TODAY! while you're excited about your new truck and willing to do whatever you need to do to get it. Because they said it would take 2-5 days to \"\"Ship\"\" it tells me it wasn't available. Time Kills Deals, and dealerships know this: they want to sign you TODAY! Some dealerships want \"\"honest\"\" money or a deposit to go get the truck, but reality is that that is a trick to test you to make sure you are going to follow through after they spend the gas and add mileage to a car. But if it takes 2 days+, The truck isn't out there, or the dealer doesn't have a vehicle the other dealership wants back, or no other dealership likes dealing with them. The only way it would take that long is if you were looking for something very rare, an odd color in an unusual configuration. Like a top end model in a low selling color, or configuration you had to have that wouldn't sell well - like you wanted all the options on a car except a cigarette lighter, you get the idea. 99.99% of the time a good enough truck is available. Deposits are BS. They don't setup any kind of real contract, notice most of the time they want a check. 
Because holding on to a check is about as binding as making you wear a chicken suit to get a rebate. All it is, is a test to see if you will go through with signing the deal. The following is an example of why you don't let time pass on a car deal. One time we had a couple want us to find a Cadillac Escalade Hybrid in red with every available option. Total cost was about $85-90k. Only two new red Escalade Hybrids were for sale in the country at the time; one was in New York and the other was in San Francisco, our dealership is in Texas, and neither was willing to trade with us, so we ended up having to buy the SUV from the other dealership's inventory. That is a very rare thing to do, by the way. We took a 25% down payment, around $20,000, in a check. We flew a driver to wherever the SUV was and then drove it back to Texas about 4 days later. The couple came back and hated the color; they would not take the SUV. The General Manager was pissed; he spent around $1000 just to bring the thing to Texas, not to mention he had to buy the thing. The couple walked and there was nothing the sales manager, GM, or salesman could do. We had not been able to deliver the car, and ultimately the dealership ate the loss, but it shows that deposits are useless. You can't sell something you don't own, and dealerships know it. Long story short, you can't claim a damage you never experienced. Not having something happen that you wanted to have happen is not a damage, because you can't show a real economic loss. One other thing: when you sign the paperwork that you thought was an application, it was an authorization for them to pull your credit, and the fine print at the bottom is boilerplate defense against getting sued for everything imaginable. Ours took up about half of one page and all of the back of the second page. I know dealing with car dealerships is hard, working at them is just as hard, and I'm sorry that you had to deal with it; however, the simplest and smoothest car deals are the ones where you pay full price.\"",
"title": ""
},
{
"docid": "39bcb0e40e9aeb3a52b16e3a23dae31e",
"text": "\"Retail purchases are purchases made at retail, i.e.: as a consumer/individual customer. That would include any \"\"standard\"\" individual expenditure, but may exclude wholesale sales or purchases from merchants who identify themselves as service providers to businesses. Specifics of these limitations really depend on your card issuer, and you should inquire with the customer service at what are their specific eligibility requirements. As an example, here in the US many cards give high cash-back for gasoline purchases, but only at \"\"retail\"\" locations. That excludes wholesale/club sellers like Costco, for example.\"",
"title": ""
},
{
"docid": "9aa425c9c92adc20f4795526b3aebf2a",
"text": "Very good answers as to how 0% loans are typically done. In addition, many are either tied to a specific large item purchase, or credit cards with a no interest period. On credit card transactions the bank is getting a fee from the retailer, who in turn is giving you a hidden charge to cover that fee. In the case of a large purchase item like a car, the retailer is again quite likely paying a fee to cover what would be that interest, something they are willing to do to make the sale. They will typically be less prone to deal as low a price in negotiation if you were not making that deal, or at times they may offer either a rebate or special low to zero finance rates, but you don't get both.",
"title": ""
},
{
"docid": "230bf99815c0f1b4b3d8aea5c08f2c0f",
"text": "The car dealership doesn't care where you get the cash; they care about it becoming their money immediately and with no risk or complications. Any loan or other arrangements you make to raise the cash is Your Problem, not theirs, unless you arrange the loan through them.",
"title": ""
},
{
"docid": "707710b1f52ebd3e174ecd48ca16ad0c",
"text": "\"I have never had a credit card and have been able to function perfectly well without one for 30 years. I borrowed money twice, once for a school loan that was countersigned, and once for my mortgage. In both cases my application was accepted. You only need to have \"\"good credit\"\" if you want to borrow money. Credit scores are usually only relevant for people with irregular income or a past history of delinquency. Assuming the debtor has no history of delinquency, the only thing the bank really cares about is the income level of the applicant. In the old days it could be difficult to rent a car without a credit car and this was the only major problem for me before about 2010. Usually I would have to make a cash deposit of $400 or something like that before a rental agency would rent me a car. This is no longer a problem and I never get asked for a deposit anymore to rent cars. Other than car rentals, I never had a problem not having a credit card.\"",
"title": ""
},
{
"docid": "c03c89b9c8a7b1f7dc27747751e1c316",
"text": "\"This is completely disgusting, utterly unethical, deeply objectionable, and yes, it is almost certainly illegal. The Federal Trade Commission has indeed filed suit, halted ads, etc in a number of cases - but these likely only represent a tiny percentage of all cases. This doesn't make what the car dealer's do ok, but don't expect the SWAT team to bust some heads any time soon - which is kind of sad, but let's deal with the details. Let's see what the Federal Trade Commission has to say in their article, Are Car Ads Taking You for a Ride? Deceptive Car Ads Here are some claims that may be deceptive — and why: Vehicles are available at a specific low price or for a specific discount What may be missing: The low price is after a downpayment, often thousands of dollars, plus other fees, like taxes, licensing and document fees, on approved credit. Other pitches: The discount is only for a pricey, fully-loaded model; or the reduced price or discount offered might depend on qualifications like the buyer being a recent college graduate or having an account at a particular bank. “Only $99/Month” What may be missing: The advertised payments are temporary “teaser” payments. Payments for the rest of the loan term are much higher. A variation on this pitch: You will owe a balloon payment — usually thousands of dollars — at the end of the term. So both of these are what the FTC explicitly says are deceptive practices. Has the FTC taken action in cases similar to this? Yes, they have: “If auto dealers make advertising claims in headlines, they can’t take them away in fine print,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection. “These actions show there is a financial cost for violating FTC orders.” In the case referenced above, the owners of a 20+ dealership chain was hit with about $250,000 in fines. If you think that's a tiny portion of the unethical gains they made from those ads in the time they were running, I'd say you were absolutely correct and that's little more than a \"\"cost of doing business\"\" for unscrupulous companies. But that's the state of the US nation at this time, and so we are left with \"\"caveat emptor\"\" as a guiding principle. What can you do about it? Competitors are technically allowed to file suit for deceptive business practices, so if you know any honest dealers in the area you can tip them off about it (try saying that out loud with a serious face). But even better, you can contact the FTC and file a formal complaint online. I wouldn't expect the world to change for your complaint, but even if it just generates a letter it may be enough to let a company know someone is watching - and if they are a big business, they might actually get into a little bit of trouble.\"",
"title": ""
},
{
"docid": "d48785f98d580c2f0bba55a4e048f87c",
"text": "\"You do not say what country you are in. This is an answer for readers in the UK. Most normal balance transfer deals are only for paying off other credit cards. However there are \"\"money transfer\"\" deals that will pay the money direct to your bank account. The deals aren't as good as balance transfer deals but they are often a competitive option compared to other types of borrowing. Another option depending on how much you need to borrow and your regular spending habits is to get a card with a \"\"0% for purchases\"\" deal and use that card for your regular shopping, then put the money you would have spent on your regular shopping towards the car.\"",
"title": ""
},
{
"docid": "b427ead79d6bc0ca641b104f8705fd3c",
"text": "I would presume this goes entirely through the credit card network rather than the banking network. I am guessing that it's essentially the same operation as if you had returned something purchased on a card to the store for credit, but I'm not sure whether it really looks like a vendor credit to the network or if it is marked as a different type of transaction.",
"title": ""
},
{
"docid": "d2256b53f1ae824a23694655782ddbbd",
"text": "Basically you put down a deposit ($49-$200) and then you get a credit limit related to that card. I think it's got a 29% APR (might wanna double check that) and you have to have a checking account. Once you can show a good payment history with it and that you've had it for a little while it will open up more options with capital one and your credit in general. The one I have had an original credit limit of $300 and now has one of $700 after 7 months. Also, make sure you're going onto Credit Karma to get updates on your credit and see what they suggest you can work on.",
"title": ""
},
{
"docid": "69eacef6ab630c1a74ab135faf233369",
"text": "\"When processing credit/debit cards there is a choice made by the company on how they want to go about doing it. The options are Authorization/Capture and Sale. For online transactions that require the delivery of goods, companies are supposed to start by initially Authorizing the transaction. This signals your bank to mark the funds but it does not actually transfer them. Once the company is actually shipping the goods, they will send a Capture command that tells the bank to go ahead and transfer the funds. There can be a time delay between the two actions. 3 days is fairly common, but longer can certainly be seen. It normally takes a week for a gas station local to me to clear their transactions. The second one, a Sale is normally used for online transactions in which a service is immediately delivered or a Point of Sale transaction (buying something in person at a store). This action wraps up both an Authorization and Capture into a single step. Now, not all systems have the same requirements. It is actually fairly common for people who play online games to \"\"accidentally\"\" authorize funds to be transferred from their bank. Processing those refunds can be fairly expensive. However, if the company simply performs an Authorization and never issues a capture then it's as if the transaction never occurred and the costs involved to the company are much smaller (close to zero) I'd suspect they have a high degree of parents claiming their kids were never authorized to perform transactions or that fraud was involved. If this is the case then it would be in the company's interest to authorize the transaction, apply the credits to your account then wait a few days before actually capturing the funds from the bank. Depending upon the amount of time for the wait your bank might have silently rolled back the authorization. When it came time for the company to capture, then they'd just reissue it as a sale. I hope that makes sense. The point is, this is actually fairly common. Not just for games but for a whole host of areas in which fraud might exist (like getting gas).\"",
"title": ""
},
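One rough way to picture the authorization/capture versus sale distinction described above is as a tiny state machine. The class below is purely illustrative and does not correspond to any real payment gateway's API.

```python
# Toy model of the card-processing flows discussed above.
from dataclasses import dataclass

@dataclass
class CardTransaction:
    amount: float
    state: str = "new"   # new -> authorized -> captured/voided, or new -> captured (sale)

    def authorize(self):
        # Funds are earmarked at the cardholder's bank but not yet transferred.
        assert self.state == "new"
        self.state = "authorized"

    def capture(self):
        # Goods ship (or service is delivered); the earmarked funds are transferred.
        assert self.state == "authorized"
        self.state = "captured"

    def void(self):
        # An authorization that is never captured expires or is voided; to the
        # cardholder it is as if the charge never happened.
        assert self.state == "authorized"
        self.state = "voided"

    def sale(self):
        # Point-of-sale style: authorization and capture wrapped into one step.
        assert self.state == "new"
        self.state = "captured"

tx = CardTransaction(49.99)
tx.authorize()     # charge appears as "pending"
tx.capture()       # funds actually move a few days later
print(tx)
```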
{
"docid": "8d71273268765dcba15255bd606fe944",
"text": "I had one of those banks that reordered transactions. Deposit cash first thing in the morning means you should have money in your account, right? Nah son. First they're going to take your balance at the beginning of the day, then they'll deduct all of the transactions you made that day, in order from largest to smallest. Did one of those put you in the red (ignoring the deposit)? Time to apply an overdraft fee to that one and every single one that comes after (in order of largest purchase to smallest, mind you). Only then would they apply your deposit, but, for many, that wasn't enough to cover the overdraft fees. I eventually received money from either a class action or a CFPB thing, but not enough to cover the amount they took in fees through that scheme. Thankfully, my deposits were large enough to at least cover the fees, so I didn't have those damnable daily fees on top of it all.",
"title": ""
}
] |
fiqa
|
cc9a068111a4d0bb85ec18eed8867da8
|
Receiving partial payment of overseas loan/company purchase?
|
[
{
"docid": "9abd5ec370b082cf841e039c527ee01a",
"text": "\"Is it equity, or debt? Understanding the exact nature of one's investment (equity vs. debt) is critical. When one invests money in a company (presumably incorporated or limited) by buying some or all of it — as opposed to lending money to the company — then one ends up owning equity (shares or stock) in the company. In such a situation, one is a shareholder — not a creditor. As a shareholder, one is not generally owed a money debt just by having acquired an ownership stake in the company. Shareholders with company equity generally don't get to treat money received from the company as repayment of a loan — unless they also made a loan to the company and the payment is designated by the company as a loan repayment. Rather, shareholders can receive cash from a company through one of the following sources: \"\"Loan repayment\"\" isn't one of those options; it's only an option if one made a loan in the first place. Anyway, each of those ways of receiving money based on one's shares in a company has distinct tax implications, not just for the shareholder but for the company as well. You should consult with a tax professional about the most effective way for you to repatriate money from your investment. Considering the company is established overseas, you may want to find somebody with the appropriate expertise.\"",
"title": ""
}
] |
[
{
"docid": "4286dcc9448a1e648b3e608b69aa08de",
"text": "I'm having a difficult time understanding how Chevron is avoiding taxes through party related loans. From my understanding, Chevron is providing loans to its Australian subsidiary at interest rates higher than market benchmarks. Does this shift profits from Australia to the U.S. and how does it help Chevron avoid taxes even though the corporate tax rate is higher in the U.S. than in Australia? Wouln't they want to be taxed at the the lower tax rate in Australia than in the U.S.? The more description the better, thanks! Edit: I think I understand that Chevron is giving out large loans with high interest rates to its subsidiary in Australia and I think the Australian subsidiary is converting its revenues to pay back the loan thus looking like profit in the corporation's books in Delaware. How is the money to pay back interest being raised if not from revenue? And how is that revenue not being taxed?",
"title": ""
},
{
"docid": "0a2e54e542bab264da2cf0c2dc3f09b7",
"text": "There are different options here. Either way, ensure that you have a paper trail of all your payments. When in doubt, speak to a lawyer, there are many who offer free consultations.",
"title": ""
},
{
"docid": "3861087c248e59a31cf6b40248e0cf0f",
"text": "I wanted to know that what if the remaining 40% of 60% in a LTV (Loan to Value ratio ) for buying a home is not paid but the borrower only wants to get 60% of the total amount of home loan that is being provided by lending company. Generally, A lending company {say Bank] will not part with their funds unless you first pay your portion of the funds. This is essentially to safeguard their interest. Let's say they pay the 60% [either to you or to the seller]; The title is still with Seller as full payment is not made. Now if you default, the Bank has no recourse against the seller [who still owns the title] and you are not paying. Some Banks may allow a schedule where the 60/40 may be applied to every payment made. This would be case to case basis. The deal could be done with only paying 20% in the beginning to the buyer and then I have to pay EMI's of $7451. The lending company is offering you 1.1 million assuming that you are paying 700K and the title will be yours. This would safeguard the Banks interest. Now if you default, the Bank can take possession of the house and recover the funds, a distress sale may be mean the house goes for less than 1.8 M; say for 1.4 million. The Bank would take back the 1.1 million plus interest and other closing costs. So if you can close the deal by paying only 20%, Bank would ask you to close this first and then lend you any money. This way if you are not able to pay the balance as per the deal agreement, you would be in loss and not the Bank.",
"title": ""
},
{
"docid": "5bb6d5c5b9d7ef1d33fcf8f7c07e2e5a",
"text": "For the first case to occur, you need to have an agreement in place with the bank, this is called overdraft protection. It's done at a cost, but cheaper than the potential series of bounce fees. I've never heard of the second choice, partial payment. That's not to say that it's not possible. The payment not made is called a bounced check, you and the recipient will be harmed a fee. I believe it's a felony to write bad checks. Good to not write a check unless there's a positive balance taking that check into account. As Dilip suggests, ask your bank.",
"title": ""
},
{
"docid": "dcfb68ac04560cc5455ac9725a74c2d2",
"text": "You could think of points 1 and 3 combined to be similar to buying shares and selling calls on a part of those shares. $50k is the net of the shares and calls sale (ie without point 3, the investor would pay more for the same stake). Look up convertible debt, and why it's used. It's basically used so that both parties get 'the best of both world's' from equity and debt financing. Who is he selling his share to in point 2 back to the business or to outside investors?",
"title": ""
},
{
"docid": "79f388d2574f818e5c8512003c48d607",
"text": "This really comes down to tax structuring (which I am not an expert on), for public companies the acquiror almost always pays for the cash to prevent any taxable drawdown of overseas accounts, dividend taxes suck, etc. For a private company, first the debt gets swept, then special dividend out - dividends received by the selling corporate entity benefit from a tax credit plus it reduces the selling price of the equity, reducing capital gains taxes.",
"title": ""
},
{
"docid": "cafd80031a7f88125c0fa2b02d28426a",
"text": "I work for an investment group in Central Asia in private equity/project investment. We use SPV and collateralized convertible loans to enter a project, we issue the loan at our own commercial bank. For each industry, the exact mechanisms vary. In most outcomes, we end up in control of some very important part of the business, and even if we have minority shares on paper, no decision is made w/o our approval. For example, we enter cosntruction projects via aquisiton of land and pledging the land as equity for an SPV, then renting it to the project operator. Basically, when you enter a business, be in control of the decisions there, or have significant leverage on the operations. Have your own operating professionals to run it. Profit.",
"title": ""
},
{
"docid": "cdede2d6ab1995907a3815ae89f6983d",
"text": "it sounds like you don't have experience in this, and neither does your *investor*; which is a recipe for disaster (pun intended). Your first order of business is to check whether your investor is an *Accredited Investor* (google to see what it means), if s/he's not, **walk away**. If s/he's an accredited investor, find a lawyer who can help you navigate this process, however these are the issues: * lawyers are expensive, and lawyers who have experience in these type of transactions are even more expensive * you actually need 2 lawyers, one for you and one for the investor * if neither of you have experience, there will be a lot more billable hours from the lawyers..... In principle this can go 3 ways: 1. The investors give you a loan, you pay them interests on a periodic basis, and then also principal. Items to be negotiated: interest rates, repayment schedule, collateral, personal guarantees. Highly unlikely this is what the investors wants. 2. The Investors get equity. items to be negotiated: your compensation, % of ownership, how profits are divided, how profits are paid; who gets to decide what. 3. A combination of 1 and 2 above, a *Convertible Note*. There's a lot more, too much for a Reddit post. There's not an easy ELI5.",
"title": ""
},
{
"docid": "e24b171d757ef9cc138878484923fbde",
"text": "\"You promised to pay the loan if he didn't. That was a commitment, and I recommend \"\"owning\"\" your choice and following it through to its conclusion, even if you never do that again. TLDR: You made a mistake: own it, keep your word, and embrace the lesson. Why? Because you keep your promises. (Nevermind that this is a rare time where your answer will be directly recorded, in your credit report.) This isn't moralism. I see this as a \"\"defining moment\"\" in a long game: 10 years down the road I'd like you to be wise, confident and unafraid in financial matters, with a healthy (if distant) relationship with our somewhat corrupt financial system. I know austerity stinks, but having a strong financial life will bring you a lot more money in the long run. Many are leaping to the conclusions that this is an \"\"EX-friend\"\" who did this deliberately. Don't assume this. For instance, it's quite possible your friend sold the (car?) at a dealer, who failed to pay off this note, or did and the lender botched the paperwork. And when the collector called, he told them that, thinking the collector would fix it, which they don't do. The point is, you don't know: your friend may be an innocent party here. Creditors generally don't report late payments to the credit bureaus until they're 30 days late. But as a co-signer, you're in a bad spot: you're liable for the payments, but they don't send you a bill. So when you hear about it, it's already nearly 30 days late. You don't get any extra grace period as a co-signer. So you need to make a payment right away to keep that from going 30 late, or if it's already 30 late, to keep it from going any later. If it is later determined that it was not necessary for you to make those payments, the lender should give them back to you. A less reputable lender may resist, and you may have to threaten small claims court, which is a great expense to them. Cheaper to pay you. They say France is the nation of love. They say America is the nation of commerce. So it's not surprising that here, people are quick to burn a lasting friendship over a temporary financial issue. Just saying, that isn't necessarily the right answer. I don't know about you, but my friends all have warts. Nobody's perfect. Financial issues are just another kind of wart. And financial life in America is hard, because we let commerce run amok. And because our obsession with it makes it a \"\"loaded\"\" issue and thus hard to talk about. Perhaps your friend is in trouble but the actual villain is a predatory lender. Point is, the friendship may be more important than this temporary adversity. The right answer may be to come together and figure out how to make it work. Yes, it's also possible he's a human leech who hops from person to person, charming them into cosigning for him. But to assume that right out of the gate is a bit silly. The first question I'd ask is \"\"where's the car?\"\" (If it's a car). Many lenders, especially those who loan to poor credit risks, put trackers in the car. They can tell you where it is, or at least, where it was last seen when the tracker stopped working. If that is a car dealer's lot, for instance, that would be very informative. Simply reaching out to the lender may get things moving, if there's just a paperwork issue behind this. Many people deal with life troubles by fleeing: they dread picking up the phone, they fearfully throw summons in the trash. This is a terrifying and miserable way to deal with such a situation. 
They learn nothing, and it's pure suffering. I prefer and recommend the opposite: turn into it, deal with it head-on, get ahead of it. Ask questions, google things, read, become an expert on the thing. Be the one calling the lender, not the other way round. This way it becomes a technical learning experience that's interesting and fun for you, and the lender is dreading your calls instead of the other way 'round. I've been sued. It sucked. But I took it on boldly, and actually led the fight and strategy (albeit with counsel). And turned it around so he wound up paying my legal bills. HA! With that precious experience, I know exactly what to do... I don't fear being sued, or if absolutely necessary, suing. You might as well get the best financial education. You're paying the tuition!\"",
"title": ""
},
{
"docid": "85110d666ba177dfbde6ed4aae613120",
"text": "Yes, truckloads of cash. /s It's exactly the same as your example, when people say to pay for a car in cash, they don't meany physical bills, but rather the idea that you aren't getting a loan. In most acquisitions, the buyer will usually pay with their own stock, pay in cash, or a combination of both.",
"title": ""
},
{
"docid": "c09e0ca4cba8ddc88883306ee7d79eac",
"text": "\"This sounds like a FATCA issue. I will attempt to explain, but please confirm with your own research, as I am not a FATCA expert. If a foreign institution has made a policy decision not to accept US customers because of the Foreign Financial Institution (FFI) obligations under FATCA, then that will of course exclude you even if you are resident outside the US. The US government asserts the principle of universal tax jurisdiction over its citizens. The institution may have a publicly available FATCA policy statement or otherwise be covered in a new story, so you can confirm this is what has happened. Failing that, I would follow up and ask for clarification. You may be able to find an institution that accepts US citizens as investors. This requires some research, maybe some legwork. Renunciation of your citizenship is the most certain way to circumvent this issue, if you are prepared to take such a drastic step. Such a step would require thought and planning. Note that there would be an expatriation tax (\"\"exit tax\"\") that deems a disposition of all your assets (mark to market for all your assets) under IRC § 877. A less direct but far less extreme measure would be to use an intermediary, either one that has access or a foreign entity (i.e. non-US entity) that can gain access. A Non-Financial Foreign Entity (NFFE) is itself subject to withholding rules of FATCA, so it must withhold payments to you and any other US persons. But the investing institutions will not become FFIs by paying an NFFE; the obligation rests on the FFI. PWC Australia has a nice little writeup that explains some of the key terms and concepts of FATCA. Of course, the simplest solution is probably to use US institutions, where possible. Non-foreign entities do not have foreign obligations under FATCA.\"",
"title": ""
},
{
"docid": "090598b25ad86dc8c42f5c2246085762",
"text": "Another option, not yet discussed here, is to allow the loan to go into default and let the loaning agency repossess the property the loan was used for, after which they sell it and that sale should discharge some significant portion of the loan. Knowing where the friend and property is, you may be able to help them carry out the repossession by providing them information. Meanwhile, your credit will take a significant hit, but unless your name is on the deed/title of the property then you have little claim that the property is yours just because you're paying the loan. The contract you signed for the loan is not going to be easily bypassed with a lawsuit of any sort, so unless you can produce another contract between you and your friend it's unlikely that you can even sue them. In short, you have no claim to the property, but the loaning agency does - perhaps that's the only way to avoid paying most of the debt, but you do trade some of your credit for it. Hopefully you understand that what you loaned wasn't money, but your credit score and earning potential, and that you will be more careful who you choose to lend this to in the future.",
"title": ""
},
{
"docid": "ce91d9cddac8975a34b9c075c7566916",
"text": "My understanding of this would be that this is for the portion of the subsidiary which they do not own. In other words they record 100% of the subsidiary on their books and then make this entry to account for the % which another company has a minority interest in.",
"title": ""
},
{
"docid": "954c15a2906ae58f160e91c32a0a1c96",
"text": "I wouldn't get too caught up with this. Doesn't sound like this is even stock reconciliation, more ensuring the cash you've received for dividends & other corporate actions agrees to your expected entitlements and if not raising claims etc.",
"title": ""
},
{
"docid": "1b4e473675196ea73e28c4a46e3d696f",
"text": "You're lending the money to your business by paying for it directly. The company accounts must reflect a credit (the amount you lend to it) and a debit (what it then puts that loan towards). It's fairly normal for a small(ish) owner-driven company to reflect a large loan-account for the owners. For example, if you have a room at home dedicated for the business it is impractical to pay rent directly via the company. The rental agreement is probably in your name, you pay the rent, and you reconcile it with the company later. You could even charge your company (taxable) interest on this loan. When you draw down the loan from the company you reverse this, debit your loan account and credit the company (paying off the debt). As far as tracking that expenditure, simply handle those third-party invoices in the normal way and file them for reference.",
"title": ""
}
] |
fiqa
|
62128b25e4488428eac8c0c4b2aa1118
|
Why is the bid-ask spread considered a cost?
|
[
{
"docid": "8ac3f7737b4923500e318bf9888f039a",
"text": "Your assets are marked to market. If you buy at X, and the market is bidding at 99.9% * X then you've already lost 0.1%. This is a market value oriented way of looking at costs. You could always value your assets with mark to model, and maybe you do, but no one else will. Just because you think the stock is worth 2*X doesn't mean the rest of the world agrees, evidenced by the bid. You surely won't get any margin loans based upon mark to model. Your bankers won't be convinced of the valuation of your assets based upon mark to model. By strictly a market value oriented way of valuing assets, there is a bid/ask cost. more clarification Relative to littleadv, this is actually a good exposition between the differences between cash and accrual accounting. littleadv is focusing completely on the cash cost of the asset at the time of transaction and saying that there is no bid/ask cost. Through the lens of cash accounting, that is 100% correct. However, if one uses accrual accounting marking assets to market (as we all do with marketable assets like stocks, bonds, options, etc), there may be a bid/ask cost. At the time of transaction, the bids used to trade (one's own) are exhausted. According to exchange rules that are now practically uniform: the highest bid is given priority, and if two bids are bidding the exact same highest price then the oldest bid is given priority; therefore the oldest highest bid has been exhausted and removed at trade. At the time of transaction, the value of the asset cannot be one's own bid but the highest oldest bid leftover. If that highest oldest bid is lower than the price paid (even with liquid stocks this is usually the case) then one has accrued a bid/ask cost.",
"title": ""
},
{
"docid": "8d589182b01015240f2be382c8bbf3cf",
"text": "\"This is a misconception. One of the explanations is that if you buy at the ask price and want to sell it right away, you can only sell at the bid price. This is incorrect. There are no two separate bid and ask prices. The price you buy (your \"\"bid\"\") is the same price someone else sells (their \"\"sell\"\"). The same goes when you sell - the price you sell at is the price someone else buys. There's no spread with stocks. Emphasized it on purpose, because many people (especially those who gamble on stock exchange without knowing what they're doing) don't understand how the stock market works. On the stock exchange, the transaction price is the match between the bid price and the ask price. Thus, on any given transaction, bid always equals ask. There's no spread. There is spread with commodities (if you buy it directly, especially), contracts, mutual funds and other kinds of brokered transactions that go through a third party. The difference (spread) is that third party's fee for assuming part of the risk in the transaction, and is indeed added to your cost (indirectly, in the way you described). These transactions don't go directly between a seller and a buyer. For example, there's no buyer when you redeem some of your mutual fund - the fund pays you money. So the fund assumes certain risk, which is why there's a spread in the prices to invest and to redeem. Similarly with commodities: when you buy a gold bar - you buy it from a dealer, who needs to keep a stock. Thus, the dealer will not buy from you at the same price: there's a premium on sale and a discount on buy, which is a spread, to compensate the dealer for the risk of keeping a stock.\"",
"title": ""
},
{
"docid": "1e7a36e86be911f447e69350463b2591",
"text": "\"As an aside, on most securities with a spread of the minimum tick, there would be no bid ask spread if so-called \"\"locked markets\"\", where the price of the best bid on one exchange is equal to the price of the best ask on another, were permitted. It is currently forbidden for a security to have posted orders having the same price for both bid and ask even though they're on different exchanges. Option spreads would narrow as well as a result.\"",
"title": ""
}
] |
[
{
"docid": "ddebe31d71f26aa6b26955c1a29cd63a",
"text": "One difference is the bid/ask spread will cost you more in a lower cost stock than a higher cost one. Say you have two highly liquid stocks with tiny spreads: If you wanted to buy say $2,000 of stock: Now imagine these are almost identical ETFs tracking the S&P 500 index and extrapolate this to a trade of $2,000,000 and you can see there's some cost savings in the higher priced stock. As a practical example, recently a popular S&P 500 ETF (Vanguard's VOO) did a reverse split to help investors minimize this oft-missed cost.",
"title": ""
},
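The comparison above can be made concrete with a couple of lines of arithmetic. The quotes below are assumed one-cent-wide markets, not real prices for any particular fund.

```python
# Round-trip bid/ask cost of putting $2,000 into a low- vs. high-priced stock.
def spread_cost(order_dollars, bid, ask):
    shares = order_dollars // ask        # whole shares bought at the ask
    return shares * (ask - bid)          # loss if immediately sold back at the bid

print(round(spread_cost(2000, 5.00, 5.01), 2))      # ~3.99 across 399 shares
print(round(spread_cost(2000, 100.00, 100.01), 2))  # ~0.19 across 19 shares
```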
{
"docid": "7e2f458774d5b5fc425a19b677227c5c",
"text": "The most likely explanation is that the calls are being bought as a part of a spread trade. It doesn't have to be a super complex trade with a bunch of buys or sells. In fact, I bought a far out of the money option this morning in YHOO as a part of a simple vertical spread. Like you said, it wouldn't make sense and wouldn't be worth it to buy that option by itself.",
"title": ""
},
{
"docid": "fa69c931afc88305d93ce38b6a9dec08",
"text": "The point is that the bid and ask prices dictate what you can buy and sell at (at market, at least), and the difference between the two, or spread, contributes implicitly to your gains or losses. For example, say your $1 stock actually had a bid of $0.90 and an ask of $1.10; i.e. say that $1 was the last price. You would have to buy the stock at the ask price of $1.10, but now you can only sell that stock at the bid price of $0.90. Thus, you would need to make at least that $0.20 spread before you can make a profit.",
"title": ""
},
{
"docid": "b809640eecffebcc467fe3278d7eec43",
"text": "Real world example. AGNC = 21.79 time of post. Upcoming .22 cents ex-div Mar 27th Weekly options Mar 27th - $22 strike put has a bid ask spread of .22 / .53. If you can get that put for less than .21 after trade fee's, you'll have yourself a .22 cent arbitrage. Anything more than .21 per contract eats into your arbitrage. At .30 cents you'll only see .13 cent arbitrage. But still have tax liability on .22 cents. (maybe .05 cents tax due to REIT non-exempt dividend rates) So that .13 gain is down to a .08 cents after taxes.",
"title": ""
},
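The AGNC numbers quoted above work out as follows, per share, ignoring taxes and any commissions not already folded into the put price.

```python
# Buy the stock, buy the $22 put, collect the dividend, then sell/exercise at the strike.
stock, strike, dividend = 21.79, 22.00, 0.22

def locked_in_profit(put_cost):
    return (strike - stock) + dividend - put_cost

print(round(locked_in_profit(0.21), 2))  # 0.22 -> the ".22 cent arbitrage"
print(round(locked_in_profit(0.30), 2))  # 0.13 -> a wider spread eats into the edge
```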
{
"docid": "ae4e14c0cb5e0aaa699d1711f8503bce",
"text": "This is copying my own answer to another question, but this is definitely relevant for you: A bid is an offer to buy something on an order book, so for example you may post an offer to buy one share, at $5. An ask is an offer to sell something on an order book, at a set price. For example you may post an offer to sell shares at $6. A trade happens when there are bids/asks that overlap each other, or are at the same price, so there is always a spread of at least one of the smallest currency unit the exchange allows. Betting that the price of an asset will go down, traditionally by borrowing some of that asset and then selling it, hoping to buy it back at a lower price and pocket the difference (minus interest). Going long, as you may have guessed, is the opposite of going short. Instead of betting that the price will go down, you buy shares in the hope that the price will go up. So, let's say as per your example you borrow 100 shares of company 'X', expecting the price of them to go down. You take your shares to the market and sell them - you make a market sell order (a market 'ask'). This matches against a bid and you receive a price of $5 per share. Now, let's pretend that you change your mind and you think the price is going to go up, you instantly regret your decision. In order to pay back the shares, you now need to buy back your shares as $6 - which is the price off the ask offers on the order book. Similarly, the same is true in the reverse if you are going long. Because of this spread, you have lost money. You sold at a low price and bought at a high price, meaning it costs you more money to repay your borrowed shares. So, when you are shorting you need the spread to be as tight as possible.",
"title": ""
},
{
"docid": "4ba855945cfa8e9af71a8036def16481",
"text": "\"Bull means the investor is betting on a rising market. Puts are a type of stock option where the seller of a put option promises to buy 100 shares of stock from the buyer of the put option at a pre-agreed price called the strike price on any day before expiration day. The buyer of the put option does not have to sell (it is optional, thats why it is called buying an option). However, the seller of the put is required to make good on their promise to the buyer. The broker can require the seller of the put option to have a deposit, called margin, to help make sure that they can make good on the promise. Profit... The buyer can profit from the put option if the stock price moves down substantially. The buyer of the put option does not need to own the stock, he can sell the option to someone else. If the buyer of the put option also owns the stock, the put option can be thought of like an insurance policy on the value of the stock. The seller of the put option profits if the stock price stays the same or rises. Basically, the seller comes out best if they can sell put options that no one ends up using by expiration day. A spread is an investment consisting of buying one option and selling another. Let's put bull and put and spread together with an example from Apple. So, if you believed Apple Inc. AAPL (currently 595.32) was going up or staying the same through JAN you could sell the 600 JAN put and buy the 550 put. If the price rises beyond 600, your profit would be the difference in price of the puts. Let's explore this a little deeper (prices from google finance 31 Oct 2012): Worst Case: AAPL drops below 550. The bull put spread investor owes (600-550)x100 shares = $5000 in JAN but received $2,035 for taking this risk. EDIT 2016: The \"\"worst case\"\" was the outcome in this example, the AAPL stock price on options expiry Jan 18, 2013 was about $500/share. Net profit = $2,035 - $5,000 = -$2965 = LOSS of $2965 Best Case: AAPL stays above 600 on expiration day in JAN. Net Profit = $2,035 - 0 = $2035 Break Even: If AAPL drops to 579.65, the value of the 600 JAN AAPL put sold will equal the $2,035 collected and the bull put spread investor will break even. Commissions have been ignored in this example.\"",
"title": ""
},
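The worst-case, best-case and break-even figures in the AAPL example above can be reproduced with a short payoff function. The $2,035 credit and the strikes are taken from the passage; commissions are ignored there and here.

```python
# Expiration payoff of the 600/550 bull put spread (short 600 put, long 550 put).
credit, short_strike, long_strike, mult = 2035, 600, 550, 100

def pnl_at_expiry(price):
    short_put = max(short_strike - price, 0) * mult   # owed to the 600-put buyer
    long_put  = max(long_strike - price, 0) * mult    # received from the 550 put
    return credit - short_put + long_put

print(round(pnl_at_expiry(620.00), 2))   #  2035.0  best case: both puts expire worthless
print(round(pnl_at_expiry(500.00), 2))   # -2965.0  worst case: full 50-point spread lost
print(round(pnl_at_expiry(579.65), 2))   #     0.0  break-even quoted in the passage
```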
{
"docid": "5d259edeee629b44ab345346a0717829",
"text": "\"When you buy a put option, you're buying the right to sell stock at the \"\"strike\"\" price. To understand why you have to pay separately for that, consider the other side of the transaction. If I agree to trade stock for money at above market rates, I need to make up the difference somewhere or face bankruptcy. That risk of loss is what the option price is about. You might assume that means the market expects the price of AMD to fall to 8.01 from it's current price of 8.06 by the option expiration date. But that would also mean call options below the market price is worthless. But that's not quite true; people who price options need to factor in volatility, since things change with time. The price MIGHT fall, and traders need to account for that risk. So 1.99 roughly represents the probability of AMD rising to 10. There's probably some technical analysis one can do to the chain, but I don't see any abnormality of AMD here.\"",
"title": ""
},
{
"docid": "bba854ffdfbf0f35c47ae1787697e656",
"text": "One broker told me that I have to simply read the ask size and the bid size, seeing what the market makers are offering. This implies that my order would have to match that price exactly, which is unfortunate because options contract spreads can be WIDE. Also, if my planned position size is larger than the best bid/best ask, then I should break up the order, which is also unfortunate because most brokers charge a lot for options orders.",
"title": ""
},
{
"docid": "90dfc0db81605a307939ab82a25f7f97",
"text": "A simple example - When looking at oil trading in different locations first I have some back of the envelop adjustments for the grade of oil, then look at storage costs (irrelevant in the case of electricity) and transport costs between two locations to see if physical players are actively arbing the spread. No strong views on reading material in this specific area - Google, google scholar and amazon all have relevant material. When it comes to your current problem, here are some questions to think about: 1. Is the power generated from the same commodity at location A and location W? 2. How has the spread changed in the past? Has trading location W actively hedged the worst cases of prices moves in location A? 3. Is it feasible to trade the commodity that location A generates the majority of its power from/how does that compare to electricity trading at location W as a hedge? 4. If hedging is really desirable, are you sure you can't do an illiquid over the counter hedge at location A? Paying a little bit more in the bid/ask for the hedge could be more desirable than trying to jump into a market you yourselves don't quite understand. 5. If your consultants come back with just some hedge ratios without discussing what drives the spreads between the two locations and where the spreads are currently be skeptical.",
"title": ""
},
{
"docid": "a561e2ff079274876b663253e7d2d371",
"text": "\"You're correct that the trading costs would be covered by the expense ratio. Just to be clear here, the expense ratio is static and doesn't change very often. It's set in such a way that the fund manager *expects* it to cover *all* of their operational costs. It's not some sort of slider that they move around with their costs. I'm not familiar with any ETF providers doing agreements which cover rent and equipment (hedge funds do - see \"\"hedge fund hotels\"\"). ETF providers do routinely enter into agreements with larger institutions that cover stuff like marketing. PowerShares, for a while, outsourced all of the management of the Qs to BNY and was responsible solely for marketing it themselves.\"",
"title": ""
},
{
"docid": "3be2b64b0a6817534c811ba341dbca23",
"text": "I'm not exactly sure, but it may be due to liquidity preference. SPY has a much higher volume (30d average of roughly 70m vs. 3.3m, 1.9m for IVV, VOO respectively), and similarly has a narrow bid ask spread of about 0.01 compared to 0.02 for the other two. I could be wrong, but I'm going to leave this post up and look in to it later, I'm curious too. The difference is very consistent though, so it may be something in their methodology.",
"title": ""
},
{
"docid": "9ec10b3f7e1202acfe037a2259d8ce4d",
"text": "\"Mathematically it's arbitrary - you could just as easily use the bid or the midpoint as the denominator, so long as you're consistent when comparing securities. So there's not a fundamental reason to use the ask. The best argument I can come up with is that most analysis is done from the buy side, so looking at liquidity costs (meaning how much does the value drop instantaneously purely because of the bid-ask spread) when you buy a security would be more relevant by using the ask (purchase price) as the basis. Meaning, if a stock has a bid-ask range of $95-$100, if you buy the stock at $100 (the ask), you immediately \"\"lose\"\" 5% (5/100) of its value since you can only sell it for $95.\"",
"title": ""
},
{
"docid": "d65931bcdd9257af1f8355851a61b1f3",
"text": "A day is a long time and the rate is not the same all day. Some sources will report a close price that averages the bid and ask. Some sources will report a volume-weighted average. Some will report the last transaction price. Some will report a time-weighted average. Some will average the highest and lowest prices for the interval. Different marketplaces will also have slightly different prices because different traders are present at each marketplace. Usually, the documentation will explain what method they use and you can choose the source whose method makes the most sense for your application.",
"title": ""
},
{
"docid": "1fec42beb84e2821dd90cd035446ea8d",
"text": "Something like cost = a × avg_spreadb + c × volatilityd × (order_size/avg_volume)e. Different brokers have different formulas, and different trading patterns will have different coefficients.",
"title": ""
},
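A minimal sketch of the kind of transaction-cost model quoted above; the coefficient values and market inputs below are invented placeholders for illustration, not any real broker's calibration:

```python
def estimated_trading_cost(avg_spread: float, volatility: float,
                           order_size: float, avg_volume: float,
                           a: float, b: float, c: float, d: float, e: float) -> float:
    """A spread term plus a market-impact term driven by volatility and participation rate."""
    participation = order_size / avg_volume
    return a * avg_spread ** b + c * volatility ** d * participation ** e

# Illustrative only: a 50k-share order in a stock that trades 2M shares per day.
cost_fraction = estimated_trading_cost(avg_spread=0.02, volatility=0.25,
                                       order_size=50_000, avg_volume=2_000_000,
                                       a=0.5, b=1.0, c=0.1, d=1.0, e=0.6)
print(f"Estimated cost as a fraction of notional: {cost_fraction:.6f}")
```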
{
"docid": "7ccebb6bcea7089d89b1fd72e66e3b81",
"text": "Thank you for replying. I'm not sure I totally follow though, aren't you totally at mercy of the liquidity in the stock? I guess I'm havinga hard time visualizing the value a human can add as opposed to say vwapping it or something. I can accept that you're right, just having a difficult time picturing it",
"title": ""
}
] |
fiqa
|
1014bb04ba0ea7e98ea702d708243f69
|
Over how much time should I dollar-cost-average my bonus from cash into mutual funds?
|
[
{
"docid": "7e77bf16ae5bcbd90ff513efa7ea6c97",
"text": "The OP invests a large amount of money each year (30-40k), and has significant amount already invested. Some in the United States that face this situation may want to look at using the bonus to fund two years worth of IRA or Roth IRA. During the period between January 1st and tax day they can put money into a IRA or Roth IRA for the previous year, and for the current year. The two deposits might have to be made separately, because the tax year for each deposit must be specified. If the individual is married, they can also fund their spouses IRA or Roth IRA. If this bonus is this large every year, the double deposit can only be done the first time, but if the windfall was unexpected getting the previous years deposit done before tax day could be useful. The deposits for the current year could still be spread out over the next 12 months. EDIT: Having thought about the issue a little more I have realized there are other timing issues that need to be considered.",
"title": ""
},
{
"docid": "a5797d874e38e3192b00a936376f037f",
"text": "There have been studies which show that Dollar Cost Averaging (DCA) underperforms Lump Sum Investing (LSI). Vanguard, in particular, has published one such study. Of course, reading about advice in a study is one thing; acting on that advice can be something else entirely. We rolled over my wife's 401(k) to an IRA back in early 2007 and just did it as a lump sum. You know what happened after that. But our horizon was 25+ years at that time, so we didn't lose too much sleep over it (we haven't sold or gone to cash, either).",
"title": ""
},
{
"docid": "780c6434ce04ec3703731bc11fc10e7d",
"text": "I'm staring at this chart and asking myself, How long a period is enough to have an average I'd be happy with regardless of the direction the market goes? 3 years? 4 years? Clearly, a lump sum investment risks a 2000 buy at 1500. Not good. Honestly, I love the question, and find it interesting, but there's likely no exact answer, just some back and forth analysis. You're investing about $40K/yr anyway. I'd suggest a 4 year timeframe is a good time to invest the new money as well. Other folk want to offer opinions? Edit - with the OP's additional info, he expects these bonuses to continue, my updated advice is to DCA quarterly if going into assets with a transaction fee or monthly if into a no-fee fund, over just a one year period.",
"title": ""
},
{
"docid": "eca7b08aae740dccd9c59d0ec0679496",
"text": "Canadian Couch Potato has an article which is somewhat related. Ask the Spud: Can You Time the Markets? The argument roughly boils down to the following: That said, I didn't follow the advice. I inherited a sum of money, more than I had dealt with before, and I did not feel I was emotionally capable of immediately dumping it into my portfolio (Canadian stocks, US stocks, world stocks, Canadian bonds, all passive indexed mutual funds), and so I decided to add the money into my portfolio over the course of a year, twice a month. The money that I had not yet invested, I put into a money market account. That worked for me because I was purchasing mutual funds with no transaction costs. If you are buying ETFs, this strategy makes less sense. In hindsight, this was not financially prudent; I'd have been financially better off to buy all the mutual funds right at the beginning. But I was satisfied with the tradeoff, knowing that I did not have hindsight and I would have been emotionally hurt had the stock market crashed. There must be research that would prove, based on past performance, the statistically optimal time frame for dollar-cost averaging. However, I strongly suppose that the time frame is rather small, and so I would advise that you either invest the money immediately, or dollar-cost average your investment over the course of the year. This answer is not an ideal answer to your question because it is lacking such a citation.",
"title": ""
}
] |
[
{
"docid": "4a19eb29e6bbded4886ff2d5b424e236",
"text": "\"I have been considering a similar situation for a while now, and the advice i have been given is to use a concept called \"\"dollar cost averaging\"\", which basically amounts to investing say 10% a month over 10 months, resulting in your investment getting the average price over that period. So basically, option 3.\"",
"title": ""
},
{
"docid": "65f01efd7b05088c5b84148dd818e886",
"text": "Expenses matter. At the back end, retirement, the most often quoted withdrawal rate is 4%. How would it feel to be paying 1/4 of each years' income to fees, separate from the taxes due, separate from whether the market is up or down? Kudos to you for learning this lesson so early. Your plan is great, and while I often say 'don't let the tax tail wag the investing dog' being mindful of the tax hit in any planned transaction is worthwhile. Selling and moving enough funds to stay at 0% is great, a no-brainer, as they say. Selling more depends on the exact numbers involved. Do a fake return, and see how an extra $1000/$2000 etc, worth of fund sale impacts the taxes. It will depend on how much gain there is for each $XXX of fund. Say you are up 25%, So $1000 has $200 of gain. 15% of $200 is $30. A 1%/yr fee cost you $10/yr, so it's worth waiting till January to sell the next shares of the fund. Keep in mind, the 'test' return will still have the 2013 rates and brackets, I suggest this only as an estimating tool.",
"title": ""
},
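A quick sketch, under the assumptions in the passage above (position up 25%, a 15% capital-gains rate, a 1%/yr fee), of the $1,000-sale estimate it walks through:

```python
def embedded_gain(sale_amount: float, total_return: float) -> float:
    """Gain embedded in a sale when the whole position is up `total_return` (0.25 = +25%)."""
    cost_basis = sale_amount / (1 + total_return)
    return sale_amount - cost_basis

sale = 1_000.0
gain = embedded_gain(sale, total_return=0.25)   # -> $200 of gain
tax_due = 0.15 * gain                           # -> $30 at the 15% capital-gains rate
annual_fee = 0.01 * sale                        # -> $10/yr avoided by leaving the 1% fund
print(f"gain ${gain:.0f}, tax ${tax_due:.0f}, fee avoided per year ${annual_fee:.0f}")
```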
{
"docid": "1c8bbe9235409f5c606a86859895a345",
"text": "That depends whether you're betting on the market going up, or down, during the year. If you don't like to bet (and I don't), you can take advantage of dollar cost averaging by splitting it up into smaller contributions throughout the year.",
"title": ""
},
{
"docid": "818f4cb44f509dfe75279353ce92a310",
"text": "In general, lump sum investing will tend to outperform dollar cost averaging because markets tend to increase in value, so investing more money earlier will generally be a better strategy. The advantage of dollar cost averaging is that it protects you in times when markets are overvalued, or prior to market corrections. As an extreme example, if you done a lump-sum investment in late 2008 and then suffered through the subsequent market crash, it may have taken you 2-3 years to get back to even. If you began a dollar cost averaging investment plan in late 2008, it may have only taken you a 6 months to get back to even. Dollar cost averaging can also help to reduce the urge to time the market, which for most investors is definitely a good thing.",
"title": ""
},
{
"docid": "ce6d317e89ec1170e735acd3e5886923",
"text": "\"Personally, I think you are approaching this from the wrong angle. You're somewhat correct in assuming that what you're reading is usually some kind of marketing material. Systematic Investment Plan (SIP) is not a universal piece of jargon in the financial world. Dollar cost averaging is a pretty universal piece of jargon in the financial world and is a common topic taught in finance classes in the US. On average, verified by many studies, individuals will generate better investment returns when they proactively avoid timing the market or attempting to pick specific winners. Say you decide to invest in a mutual fund, dollar cost averaging means you invest the same dollar amount in consistent intervals rather than buying a number of shares or buying sporadically when you feel the market is low. As an example I'll compare investing $50 per week on Wednesdays, versus 1 share per week on Wednesdays, or the full $850 on the first Wednesday. I'll use the Vanguard Large cap fund as an example (VLCAX). I realize this is not really an apples to apples comparison as the invested amounts are different, I just wanted to show how your rate of return can change depending on how your money goes in to the market even if the difference is subtle. By investing a common dollar amount rather than a common share amount you ultimately maintain a lower average share price while the share price climbs. It also keeps your investment easy to budget. Vanguard published an excellent paper discussing dollar cost averaging versus lump sum investing which concluded that you should invest as soon as you have funds, rather than parsing out a lump sum in to smaller periodic investments, which is illustrated in the third column above; and obviously worked out well as the market has been increasing. Ultimately, all of these companies are vying to customers so they all have marketing teams trying to figure out how to make their services sound interesting and unique. If they all called dollar cost averaging, \"\"dollar cost averaging\"\" none of them would appear to be unique. So they devise neat acronyms but it's all pretty much the same idea. Trickle your money in to your investments as the money becomes available to you.\"",
"title": ""
},
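A minimal sketch, not from the original answer, of the dollar-based versus share-based buying it describes; the weekly prices below are made up for illustration rather than the actual VLCAX history referenced in the passage:

```python
# Hypothetical weekly prices for a fund - illustrative only.
prices = [25.0, 24.0, 26.0, 23.0, 27.0, 28.0, 26.5, 27.5]

# Dollar-cost averaging: a fixed $50 each week buys more shares when the price is low.
dca_shares = sum(50.0 / p for p in prices)
dca_avg_cost = (50.0 * len(prices)) / dca_shares

# Fixed-share buying: one share each week, whatever it happens to cost.
fixed_avg_cost = sum(prices) / len(prices)

print(f"DCA average cost per share:         {dca_avg_cost:.4f}")
print(f"Fixed-share average cost per share: {fixed_avg_cost:.4f}")
# The DCA average cost can never exceed the simple average of the prices paid.
```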
{
"docid": "b9d819ec9577a248f9bd639cd5dfe85e",
"text": "\"How often should one use dollar-cost averaging? Trivially, a dollar cost averaging (DCA) strategy must be used at least twice! More seriously, DCA is a discipline that people (typically investors with relatively small amounts of money to invest each month or each quarter) use to avoid succumbing to the temptation to \"\"time the market\"\". As mhoran_psprep points out, it is well-suited to 401k plans and the like (e.g. 403b plans for educational and non-profit institutions, 457 plans for State employees, etc), and indeed is actually the default option in such plans, since a fixed amount of money gets invested each week, or every two weeks, or every month depending on the payroll schedule. Many plans offer just a few mutual funds in which to invest, though far too many people, having little knowledge or understanding of investments, simply opt for the money-market fund or guaranteed annuity fund in their 4xx plans. In any case, all your money goes to work immediately since all mutual funds let you invest in thousandths of a share. Some 401k/403b/457 plans allow investments in stocks through a brokerage, but I think that using DCA to buy individual stocks in a retirement plan is not a good idea at all. The reasons for this are that not only must shares must be bought in whole numbers (integers) but it is generally cheaper to buy stocks in round lots of 100 (or multiples of 100) shares rather than in odd lots of, say, 37 shares. So buying stocks weekly, or biweekly or monthly in a 401k plan means paying more or having the money sit idle until enough is accumulated to buy 100 shares of a stock at which point the brokerage executes the order to buy the stock; and this is really not DCA at all. Worse yet, if you let the money accumulate but you are the one calling the shots \"\"Buy 100 shares of APPL today\"\" instead of letting the brokerage execute the order when there is enough money, you are likely to be timing the market instead of doing DCA. So, are brokerages useless in retirement fund accounts? No, they can be useful but they are not suitable for DCA strategies involving buying stocks. Stick to mutual funds for DCA. Do people use it across the board on all stock investments? As indicated above, using DCA to buy individual stocks is not the best idea, regardless of whether it is done inside a retirement plan or outside. DCA outside a retirement plan works best if you not trust yourself to stick with the strategy (\"\"Ooops, I forgot to mail the check yesterday; oh, well, I will do it next week\"\") but rather, arrange for your mutual fund company to take the money out of your checking account each week/month/quarter etc, and invest it in whatever fund(s) you have chosen. Most companies have such programs under names such as Automatic Investment Program (AIP) etc. Why not have your bank send the money to the mutual fund company instead? Well, that works too, but my bank charges me for sending the money whereas my mutual fund company does AIP for free. But YMMV. Dollar-cost averaging generally means investing a fixed amount of money on a periodic basis. An alternative strategy, if one has decided that owning 1200 shares of FlyByKnight Co is a good investment to have, is to buy round lots of 100 shares of FBKCO each month. The amount of money invested each month varies, but at the end of the year, the average cost of the 1200 shares is the average of the prices on the 12 days on which the investments were made. 
Of course, by the end of the year, you might not think FBKCO is worth holding any more. This technique worked best in the \"\"good old days\"\" when blue-chip stocks paid what was for all practical purposes a guaranteed dividend each year, and people bought these stocks with the intention of passing them on to their widows and children.\"",
"title": ""
},
{
"docid": "1e49d2b2c9c88cf6090f1836b3968990",
"text": "Theoretically there is always a time value of money. You'll need to keep your cash in a Money Market Fund to realize its potential (I'm not saying MMFs are the best investment strategy, they are the best kind of account for liquid cash). Choose an accounts that's flexible with regard to its minimum required so you can always keep this extra money in it and remove it when you need to make a payment.",
"title": ""
},
{
"docid": "eef9aedb0ad4b895b7f771712e625179",
"text": "If you are making regular periodic investments (e.g. each pay period into a 401(k) plan) or via automatic investment scheme in a non-tax-deferred portfolio (e.g. every month, $200 goes automatically from your checking account to your broker or mutual fund house), then one way of rebalancing (over a period of time) is to direct your investment differently into the various accounts you have, with more going into the pile that needs bringing up, and less into the pile that is too high. That way, you can avoid capital gains or losses etc in doing the selling-off of assets. You do, of course, take longer to achieve the balance that you seek, but you do get some of the benefits of dollar-cost averaging.",
"title": ""
},
{
"docid": "364ef9c8cb65d47d63f4f94816cb29d7",
"text": "There are a number of scholarly articles on the subject including a number at the end of the Vanguard article you reference. However, unfortunately like much of financial research you can't look at the articles without paying quite a bit. It is not easy to make a generic comparison between lump-sum and dollar cost averaging because there are many ways to do dollar cost averaging. How long do you average over? Do you evenly average or exponentially put the money to work? The easiest way to think about this problem though is does the extra compounding from investing more of the money immediately outweigh the chance that you may have invested all the money when the market is overvalued. Since the market is usually near the correct value investing in lump sum will usually win out as the Vanguard article suggests. As a side note, while using DCA on a large one time sum of money is generally not optimal, if you have a consistent salary DCA by frequently investing a portion of your salary has been frequently shown to be a very good idea of long periods over saving up a bunch of money and investing it all at once. In this case you get the compounding advantage of investing early and you avoid investing a large chunk of money when the market is overvalued.",
"title": ""
},
{
"docid": "5790337078c1c0fd24948a1f5458e974",
"text": "Your idea is a good one, but, as usual, the devil is in the details, and implementation might not be as easy as you think. The comments on the question have pointed out your Steps 2 and 4 are not necessarily the best way of doing things, and that perhaps keeping the principal amount invested in the same fund instead of taking it all out and re-investing it in a similar, but different, fund might be better. The other points for you to consider are as follows. How do you identify which of the thousands of conventional mutual funds and ETFs is the average-risk / high-gain mutual fund into which you will place your initial investment? Broadly speaking, most actively managed mutual fund with average risk are likely to give you less-than-average gains over long periods of time. The unfortunate truth, to which many pay only Lipper service, is that X% of actively managed mutual funds in a specific category failed to beat the average gain of all funds in that category, or the corresponding index, e.g. S&P 500 Index for large-stock mutual funds, over the past N years, where X is generally between 70 and 100, and N is 5, 10, 15 etc. Indeed, one of the arguments in favor of investing in a very low-cost index fund is that you are effectively guaranteed the average gain (or loss :-(, don't forget the possibility of loss). This, of course, is also the argument used against investing in index funds. Why invest in boring index funds and settle for average gains (at essentially no risk of not getting the average performance: average performance is close to guaranteed) when you can get much more out of your investments by investing in a fund that is among the (100-X)% funds that had better than average returns? The difficulty is that which funds are X-rated and which non-X-rated (i.e. rated G = good or PG = pretty good), is known only in hindsight whereas what you need is foresight. As everyone will tell you, past performance does not guarantee future results. As someone (John Bogle?) said, when you invest in a mutual fund, you are in the position of a rower in rowboat: you can see where you have been but not where you are going. In summary, implementation of your strategy needs a good crystal ball to look into the future. There is no such things as a guaranteed bond fund. They also have risks though not necessarily the same as in a stock mutual fund. You need to have a Plan B in mind in case your chosen mutual fund takes a longer time than expected to return the 10% gain that you want to use to trigger profit-taking and investment of the gain into a low-risk bond fund, and also maybe a Plan C in case the vagaries of the market cause your chosen mutual fund to have negative return for some time. What is the exit strategy?",
"title": ""
},
{
"docid": "5b67fa3ebd9f9eff76ad19f0552d7686",
"text": "Dollar cost averaging moderates risk. But you pay for this by giving up the chance for higher gains. If you took a hundred people and randomly had them fully buy into the market over a decade period, some of those people will do very well (relative to the rest) while others will do very poorly (relatively). If you dollar cost average, your performance would fall into the middle so you don't fall into the bottom (but you won't fall into the top either).",
"title": ""
},
{
"docid": "16a25ba54cca763a15a0b7ac4bcde9de",
"text": "The time horizon for your 401K/IRA is essentially the same, and it doesn't stop at the day you retire. On the day you do the rollover you will be transferring your funds into similar investments. S&P500 index to S&P 500 index; 20xx retirement date to 20xx retirement date; small cap to small cap... If your vested portion is worth X $'s when the funds are sold, that is the amount that will be transferred to the IRA custodian or the custodian for the new employer. Use the transfer to make any rebalancing adjustments that you want to make. But with as much as a year before you leave the company if you need to rebalance now, then do that irrespective of your leaving. Cash is what is transferred, not the individual stock or mutual fund shares. Only move your funds into a money market account with your current 401K if that makes the most sense for your retirement plan. Also keep in mind unless the amount in the 401K is very small you don't have to do this on your last day of work. Even if you are putting the funds in a IRA wait until you have started with the new company and so can define all your buckets based on the options in the new company.",
"title": ""
},
{
"docid": "d3741d5862564553029f431e8570eb66",
"text": "\"The mutual fund is legally its own company that you're investing in, with its own expenses. Mutual fund expense ratios are a calculated value, not a promise that you'll pay a certain percentage on a particular day. That is to say, at the end of their fiscal year, a fund will total up how much it spent on administration and divide it by the total assets under management to calculate what the expense ratio is for that year, and publish it in the annual report. But you can't just \"\"pay the fee\"\" for any given year. In a \"\"regular\"\" account, you certainly could look at what expenses were paid for each fund by multiplying the expense ratio by your investment, and use it in some way to figure out how much additional you want to contribute to \"\"make it whole\"\" again. But it makes about as much sense as trying to pay the commission for buying a single stock out of one checking account while paying for the share price out of another. It may help you in some sort of mental accounting of expenses, but since it's all your money, and the expenses are all part of what you're paying to be able to invest, it's not really doing much good since money is fungible. In a retirement account with contribution limits, it still doesn't really make sense, since any contribution from outside funds to try to pay for expense ratios would be counted as contributions like any other. Again, I guess it could somehow help you account for how much money you wanted to contribute in a year, but I'm not really sure it would help you much. Some funds or brokerages do have non-expense-ratio-based fees, and in some cases you can pay for those from outside the account. And there are a couple cases where for a retirement account this lets you keep your contributions invested while paying for fees from outside funds. This may be the kind of thing that your coworker was referring to, though it's hard to tell exactly from your description. Usually it's best just to have investments with as low fees as possible regardless, since they're one of the biggest drags on returns, and I'd be very wary of any brokerage-based fees when there are very cheap and free mutual fund brokerages out there.\"",
"title": ""
},
{
"docid": "993793d6dcee694fa8034a12ea35d61e",
"text": "Can you isolate the market impact to just the Fed's quantitative easing? Can you rule out the future economic predictions of low growth and that there are reasons why the Fed has kept rates low and is trying its best to stimulate the economy? Just something to consider here. The key is to understand what is the greater picture here as well as the question of which stock market index are you looking at that has done so badly. Some stocks may be down and others may be up so it isn't necessarily bad for all equally.",
"title": ""
},
{
"docid": "e2f9b8faa0d16414f9b1f39f9b0199f3",
"text": "I think it depends on who is being paid for your app. Do you have a company the is being paid? Or is it you personally? If you have a company then that income will disappear by offsetting it through expenses to get the software developed. If they are paying you personally then you can probably still get the income to disappear by file home-office expenses. I think either way you need to talk to an accountant. If you don't want to mess with it since the amount of income is small then I would think you can file it as additional income (maybe a 1099).",
"title": ""
}
] |
fiqa
|
24533db1d385ba6c9a81c550c88cf030
|
Is there any reason to choose my bank's index fund over Vanguard?
|
[
{
"docid": "6fc9945af9c41291f054e379070cc7d6",
"text": "That expense ratio on the bank fund is criminally high. Use the Vanguard one, they have really low expenses.",
"title": ""
},
{
"docid": "0918254a089cca9fd94fee63324ec519",
"text": "\"Your bank's fund is not an index fund. From your link: To provide a balanced portfolio of primarily Canadian securities that produce income and capital appreciation by investing primarily in Canadian money market instruments, debt securities and common and preferred shares. This is a very broad actively managed fund. Compare this to the investment objective listed for Vanguard's VOO: Invests in stocks in the S&P 500 Index, representing 500 of the largest U.S. companies. There are loads of market indices with varying formulas that are supposed to track the performance of a market or market segment that they intend to track. The Russel 2000, The Wilshire 1000, The S&P 500, the Dow Industrial Average, there is even the SSGA Gender Diversity Index. Some body comes up with a market index. An \"\"Index Fund\"\" is simply a Mutual Fund or Exchange Traded Fund (ETF) that uses a market index formula to make it's investment decisions enabling an investor to track the performance of the index without having to buy and sell the constituent securities on their own. These \"\"index funds\"\" are able to charge lower fees because they spend $0 on research, and only make investment decisions in order to track the holdings of the index. I think 1.2% is too high, but I'm coming from the US investing world it might not be that high compared to Canadian offerings. Additionally, comparing this fund's expense ratio to the Vanguard 500 or Total Market index fund is nonsensical. Similarly, comparing the investment returns is nonsensical because one tracks the S&P 500 and one does not, nor does it seek to (as an example the #5 largest holding of the CIBC fund is a Government of Canada 2045 3.5% bond). Everyone should diversify their holdings and adjust their investment allocations as they age. As you age you should be reallocating away from highly volatile common stock and in to assets classes that are historically more stable/less volatile like national government debt and high grade corporate/local government debt. This fund is already diversified in to some debt instruments, depending on your age and other asset allocations this might not be the best place to put your money regardless of the fees. Personally, I handle my own asset allocations and I'm split between Large, Mid and Small cap low-fee index funds, and the lowest cost high grade debt funds available to me.\"",
"title": ""
},
{
"docid": "0b670b29a3d3a76a766776efbe58ece6",
"text": "Extortionate expense ratio aside, comparing the fund to the vanguard balanced fund (with an expense ratio of 0.19%) shows that your bank's fund has underperformed in literally every shown time period. Mind you, the vanguard fund is all US stocks and bonds which have done very well whereas the CIBC fund is mostly Canadian. Looking at the CIBC top 10 holdings does seem to suggest that it's (poorly) actively managed instead of being an index tracker for what that's worth. Maybe your bank offers cheaper transaction costs when buying their own funds but even then the discount would have to be pretty big to make up for the underperformance. Basically, go Vanguard here.",
"title": ""
},
{
"docid": "b9bc2704543ef45b92937fea547e721d",
"text": "Basically, no. Selecting an actively managed fund over a low-fee index fund means paying for the opportunity to possibly outperform the index fund. A Random Walk Down Wall Street by Burton Malkiel argues that the best general strategy for the average investor is to select the index fund because the fee savings are certain. Assuming a random walk means that any mutual fund may outperform the index in some years, but this is not an indication that it will overall. Unless you have special information about the effectiveness of the bank fund management (it's run by the next Warren Buffett), you are better off in the index fund. And even Warren Buffett suggests you are probably better off in the index fund: This year, regarding Wall Street, Buffett wrote: “When trillions of dollars are managed by Wall Streeters charging high fees, it will usually be the managers who reap outsized profits, not the clients. Both large and small investors should stick with low-cost index funds.”",
"title": ""
}
] |
[
{
"docid": "6e9c4642ac9007f637230e47ac684d37",
"text": "Meh. Seems like splitting hairs to me. I've tried to get Vanguard to open fossil-free index funds as Barclays has (and to which I moved heaping helpings of my Vanguard money) so maybe I'm part of the problem. By the by, those fossil free funds have been outperforming their fossilized index counterparts.",
"title": ""
},
{
"docid": "6ae1356d942a1f11b3d2191aadab1c0b",
"text": "Placing bets on targeted sectors of the market totally makes sense in my opinion. Especially if you've done research, with a non-biased eye, that convinces you those sectors will continue to outperform. However, the funds you've boxed in red all appear to be actively managed funds (I only double-checked on the first.) There is a bit of research showing that very few active managers consistently beat an index over the long term. By buying these funds, especially since you hope to hold for decades, you are placing bets that these managers maintain their edge over an equivalent index. This seems unlikely to be a winning bet the longer you hold the position. Perhaps there are no sector index funds for the sectors or focuses you have? But if there were, and it was my money that I planned to park for the long term, I'd pick the index fund over the active managed fund. Index funds also have an advantage in costs or fees. They can charge substantially less than an actively managed fund does. And fees can be a big drag on total return.",
"title": ""
},
{
"docid": "3a16e38607c9d834e9d46ff63df423c5",
"text": "No I get that. But if you don’t want risk, then buy bonds. Long term an S&P Index has very low risk. On the other hand, actively managed funds have fees that take out a ton of the gain that could be had. I don’t have time to look for the study but I read recently that 97% of actively managed funds were outperformed by S&P Indexes after fees. Now I don’t know about you but I think the risk of not picking a top 3% fund is probably higher than the safe return of index’s.",
"title": ""
},
{
"docid": "d9e1eabed9baab993878f36c4cd990f2",
"text": "It's very simple. The low cost index funds are generally the best investments for investors, but - because of the low fees and the fact that the offerings of different companies are nearly identical - they are the worst for the investment houses. Therefore, the investment houses spend a lot of money convincing investors to choose other funds. If you remember that investment houses are all in the business of making money for themselves, not for the investor, then the whole financial system will make much more sense.",
"title": ""
},
{
"docid": "8b90dc3f316e64f6d93f0fd4e355334d",
"text": "An index fund is inherently diversified across its index -- no one stock will either make or break the results. In that case it's a matter of picking the index(es) you want to put the money into. ETFs do permit smaller initial purchases, which would let you do a reasonable mix of sectors. (That seems to be the one advantage of ETFs over traditional funds...?)",
"title": ""
},
{
"docid": "7129104fb2ab770f186c5882f2e6074c",
"text": "\"when the index is altered to include new players/exclude old ones, the fund also adjusts The largest and (I would say) most important index funds are whole-market funds, like \"\"all-world-ex-US\"\", or VT \"\"Total World Stock\"\", or \"\"All Japan\"\". (And similarly for bonds, REITS, etc.) So companies don't leave or enter these indexes very often, and when they do (by an initial offering or bankruptcy) it is often at a pretty small value. Some older indices like the DJIA are a bit more arbitrary but these are generally not things that index funds would try to match. More narrow sector or country indices can have more of this effect, and I believe some investors have made a living from index arbitrage. However well run index funds don't need to just blindly play along with this. You need to remember that an index fund doesn't need to hold precisely every company in the index, they just need to sample such that they will perform very similarly to the index. The 500th-largest company in the S&P 500 is not likely to have all that much of an effect on the overall performance of the index, and it's likely to be fairly correlated to other companies in similar sectors, which are also covered by the index. So if there is a bit of churn around the bottom of the index, it doesn't necessarily mean the fund needs to be buying and selling on each transition. If I recall correctly it's been shown that holding about 250 stocks gives you a very good match with the entire US stock market.\"",
"title": ""
},
{
"docid": "7f5297c019677d5e757c6de33dcde6e5",
"text": "When you are putting your money in an index fund, you are not betting your performance against other asset classes but rather against competing investments withing the SAME asset class. The index fund always wins due to two factors: diversity, and lower cost. The lower cost attribute is essentially where you get your performance edge over the longer run. That is why if you look at the universe of mutual funds (where you get your diversification), very few will have beaten the index, assuming they have survived. -Ralph Winters",
"title": ""
},
{
"docid": "f733c669f45268778a0bccf62fb4aab9",
"text": "Vanguard has a lot of mutual fund offerings. (I have an account there.) Within the members' section they give indications of the level of risk/reward for each fund.",
"title": ""
},
{
"docid": "119a6b3a616e6ba5f32ab33c55c6b746",
"text": "So, why or why should I not invest in the cheaper index fund? They are both same, one is not cheaper than other. You get something that is worth $1000. To give a simple illustration; There is an item for $100, Vanguard creates 10 Units out of this so price per unit is $10. Schwab creates 25 units out of this, so the per unit price is $4. Now if you are looking at investing $20; with Vanguard you would get 2 units, with Schwab you would get 5 units. This does not mean one is cheaper than other. Both are at the same value of $20. The Factors you need to consider are; Related question What differentiates index funds and ETFs?",
"title": ""
},
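A tiny sketch of the unit-pricing arithmetic in the passage above; the $100 pool and the 10/25 unit splits are taken from that illustration:

```python
def units_bought(investment: float, pool_value: float, units_outstanding: int) -> float:
    """How many units a given investment buys when a fixed pool is split into N units."""
    unit_price = pool_value / units_outstanding
    return investment / unit_price

for units_outstanding in (10, 25):
    unit_price = 100.0 / units_outstanding
    n = units_bought(20.0, pool_value=100.0, units_outstanding=units_outstanding)
    print(f"{units_outstanding} units -> unit price ${unit_price:.2f}, "
          f"$20 buys {n:.0f} units worth ${n * unit_price:.2f}")
# Either way the $20 stake is worth exactly $20; the headline unit price alone tells you nothing.
```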
{
"docid": "03e5e991176e44bbed3ca310f9fb51b0",
"text": "One reason it matters whether or not you're beating the S&P 500 (or the Wilshire 5000, or whatever benchmark you choose to use) is to determine whether or not you'd be better off investing in an index fund (or some other investment vehicle) instead of pursuing whatever your current investment strategy happens to be. Even if your investment strategy makes money, earning what the S&P 500 has averaged over multiple decades (around 10%) with an index fund means a lot more money than a 5% return with an actively managed portfolio (especially when you consider factors like compound interest and inflation). I use the S&P 500 as one of my criteria for judging how well (or poorly) my financial adviser is doing for me. If his recommendations (or trading activity on my behalf, if authorized) are inferior to the S&P 500, for too long, then I have a basis to discontinue the relationship. Check out this Wikipedia entry on stock market indices. There are legitimate criticisms, but on the whole I think they are useful. As an aside, the reason I point to index funds specifically is that they are the one of the lowest-cost, fire-and-forget investment strategies around. If you compare the return of the S&P 500 index over multiple decades with most actively managed mutual funds, the S&P 500 index comes out ahead.",
"title": ""
},
{
"docid": "524c23a6c5119818456cf14353b617db",
"text": "\"Vanguard's Admiral shares are like regular (\"\"investor\"\") shares in their funds, only they charge lower expense ratios. They have higher investment minimums, though. (For instance, the Vanguard Total Stock Market Index Fund has a minimum of $3,000 and an expense ratio of .18% for the Investor Shares class, but a minimum of $10,000 and an expense ratio of .07% for Admiral Shares). If you've bought a bunch of investor shares and now meet the (recently-reduced) minimum for Admiral shares, or if you have some and buy some more investor shares in the future and meet the minimums, you will qualify for a free, no-tax-impact conversion to the Admiral Shares and save yourself some money. For more information, see the Vanguard article on their recent changes to Admiral Shares minimums. Vanguard also offers institutional-class shares with even lower expense ratios than that (with a minimum of $5 million, .06% expense ratios on the same fund). A lot of the costs of operating a fund are per-individual, so they don't need to charge you extra fees for putting in more money after a certain point. They'd rather be competitive and offer it at cost. Vanguard's funds typically have very low expense ratios to begin with. (The investor shares I've been using as an example are advertised as \"\"84% lower than the average expense ratio of funds with similar holdings\"\".) In fact, Vanguard's whole reason for existing is the premise (stated in founder John C Bogle's undergraduate thesis at Princeton) that individuals can generally get better returns by investing in a cheap fund that tracks an index than by investing in mutual funds that try to pick stocks and beat the index and charge you a steep markup. The average real return of the stock market is supposedly something like 4%; even a small-looking percentage like 1% can eat a big portion of that. Over the course of 40 years waiting for retirement, saving 1% on expenses could leave you with something like 50% more money when you've retired. If you are interested in the lower expense ratios of the Admiral share classes but cannot meet the minimums, note that funds which are available as ETFs can be traded from Vanguard brokerage accounts commission-free and typically charge the same expense ratios as the Admiral shares without any minimums (but you need to trade them as individual shares, and this is less convenient than moving them around in specific dollar amounts).\"",
"title": ""
},
{
"docid": "4e5d97779d66424a1f1b251caeed7bf6",
"text": "and seems to do better than the S&P 500 too. No, that's not true. In fact, this fund is somewhere between S&P500 and the NASDAQ Composite indexes wrt to performance. From my experience (I have it too), it seems to fall almost in the middle between SPY and QQQ in daily moves. So it does provide diversification, but you're basically diversifying between various indexes. The cost is the higher expense ratios (compare VTI to VOO).",
"title": ""
},
{
"docid": "6e4f01017045a7b9ef74ebae91eacf5a",
"text": "\"I actually love this question, and have hashed this out with a friend of mine where my premise was that at some volume of money it must be advantageous to simply track the index yourself. There some obvious touch-points: Most people don't have anywhere near the volume of money required for even a $5 commission outweigh the large index fund expense ratios. There are logistical issues that are massively reduced by holding a fund when it comes to winding down your investment(s) as you get near retirement age. Index funds are not touted as categorically \"\"the best\"\" investment, they are being touted as the best place for the average person to invest. There is still a management component to an index like the S&P500. The index doesn't simply buy a share of Apple and watch it over time. The S&P 500 isn't simply a single share of each of the 500 larges US companies it's market cap weighted with frequent rebalancing and constituent changes. VOO makes a lot of trades every day to track the S&P index, \"\"passive index investing\"\" is almost an oxymoron. The most obvious part of this is that if index funds were \"\"the best\"\" way to invest money Berkshire Hathaway would be 100% invested in VOO. The argument for \"\"passive index investing\"\" is simplified for public consumption. The reality is that over time large actively managed funds have under-performed the large index funds net of fees. In part, the thrust of the advice is that the average person is, or should be, more concerned with their own endeavors than they are managing their savings. Investment professionals generally want to avoid \"\"How come I my money only returned 4% when the market index returned 7%? If you track the index, you won't do worse than the index; this helps people sleep better at night. In my opinion the dirty little secret of index funds is that they are able to charge so much less because they spend $0 making investment decisions and $0 on researching the quality of the securities they hold. They simply track an index; XYZ company is 0.07% of the index, then the fund carries 0.07% of XYZ even if the manager thinks something shady is going on there. The argument for a majority of your funds residing in Mutual Funds/ETFs is simple, When you're of retirement age do you really want to make decisions like should I sell a share of Amazon or a share of Exxon? Wouldn't you rather just sell 2 units of SRQ Index fund and completely maintain your investment diversification and not pay commission? For this simplicity you give up three basis points? It seems pretty reasonable to me.\"",
"title": ""
},
{
"docid": "d5aef11d085a3dd22f8ef4a9e831aea5",
"text": "\"Couple of clarifications to start off: Index funds and ETF's are essentially the same investments. ETF's allow you to trade during the day but also make you reinvest your dividends manually instead of doing it for you. Compare VTI and VTSAX, for example. Basically the same returns with very slight differences in how they are run. Because they are so similar it doesn't matter which you choose. Either index funds and ETF's can be purchased through a regular taxable brokerage account or through an IRA or Roth IRA. The decision of what fund to use and whether to use a brokerage or IRA are separate. Whole market index funds will get you exposure to US equity but consider also diversifying into international equity, bonds, real estate (REITS), and emerging markets. Any broker can give you advice on that score or you can get free advice from, for example, Future Advisor. Now the advice: For most people in your situation, you current tax rate is currently very low. This makes a Roth IRA a very reasonable idea. You can contribute $5,500 for 2015 if you do it before April 15 and you can contribute $5,500 for 2016. Repeat each year. You won't be able to get all your money into a Roth, but anything you can do now will save you money on taxes in the long run. You put after-tax money in a Roth IRA and then you don't pay taxes on it or the gains when you take it out. You can use Roth IRA funds for college, for a first home, or for retirement. A traditional IRA is not recommended in your case. That would save you money on taxes this year, when presumably your taxes are already low. Since you won't be able to put all your money in the IRA, you can put the rest in a regular taxable brokerage account (if you don't just want to put it in a savings account). You can buy the same types of things as you have in your IRA. Note that if your stocks (in your regular brokerage account) go up over the course of a year and your income is low enough to be in the 10 or 15% tax bracket and you have held the stock for at least a year, you should sell before the end of the year to lock in your gains and pay taxes on them at the capital gains rate of 0%. This will prevent you from paying a higher rate on those gains later. Conversely, if you lose money in a year, don't sell. You can sell and lock in losses during years when your taxes are high (presumably, after college) to reduce your tax burden in those years (this is called \"\"tax loss harvesting\"\"). Sounds like crazy contortions but the name of the game is (legally) avoiding taxes. This is at least as important to your overall wealth as the decision of which funds to buy. Ok now the financial advisor. It's up to you. You can make your own financial decisions and save the money but it requires you putting in the effort to be educated. For many of us, this education is fun. Also consider that if you use a regular broker, like Fidelity, you can call up and they have people who (for free) will give you advice very similar to what you will get from the advisor you referred to. High priced financial advisors make more sense when you have a lot of money and complicated finances. Based on your question, you don't strike me as having those. To me, 1% sounds like a lot to pay for a simple situation like yours.\"",
"title": ""
},
{
"docid": "bd36cc84ea10cfdc1920099d015b5085",
"text": "Why don't you look at the actual funds and etfs in question rather than seeking a general conclusion about all pairs of funds and etfs? For example, Vanguard's total stock market index fund (VTSAX) and ETF (VTI). Comparing the two on yahoo finance I find no difference over the last 5 years visually. For a different pair of funds you may find something very slightly different. In many cases the index fund and ETF will not have the same benchmark and fees so comparisons get a little more cloudy. I recall a while ago there was an article that was pointing out that at the time emerging market ETF's had higher fees than corresponding index funds. For this reason I think you should examine your question on a case-by-case basis. Index fund and ETF returns are all publicly available so you don't have to guess.",
"title": ""
}
] |
fiqa
|
729467dcbc2b49c1b5e4f4bf6aa12aeb
|
What are these fees attached to mutual fund FSEMX?
|
[
{
"docid": "a286b75a29218a3fd4c1ff216ddc054a",
"text": "Annual-report expense ratios reflect the actual fees charged during a particular fiscal year. Prospectus Expense Ratio (net) shows expenses the fund company anticipates will actually be borne by the fund's shareholders in the upcoming fiscal year less any expense waivers, offsets or reimbursements. Prospectus Gross Expense Ratio is the percentage of fund assets used to pay for operating expenses and management fees, including 12b-1 fees, administrative fees, and all other asset-based costs incurred by the fund, except brokerage costs. Fund expenses are reflected in the fund's NAV. Sales charges are not included in the expense ratio. All of these ratios are gathered from a fund's prospectus.",
"title": ""
},
{
"docid": "3b59b30300158f1e9a548311f157fde3",
"text": "\"FSEMX has an annual expense ratio of 0.1% which is very low. What that means is that each month, the FSEMX will pay itself one-twelfth of 0.1% of the total value of all the shares owned by the shareholders in the mutual fund. If the fund has cash on hand from its trading activities or dividends collected from companies whose stock is owned by FSEMX or interest on bonds owned by FSEMX, the money comes out of that, but if there is no such pot (or the pot is not large enough), then the fund manager has the authority to sell some shares of the stocks held by FSEMX so that the employees can be paid, etc. If the total of cash generated by the trading and the dividend collection in a given year is (say) 3% of the share value of all the outstanding mutual fund, then only 2.9% will be paid out as dividend and capital gain distribution income to the share holders, the remaining 0.1% already having been paid to FSEMX management for operating expenses. It is important to keep in mind that expenses are always paid even if there are no profits, or even if there are losses that year so that no dividends or capital gains distributions are made. You don't see the expenses explicitly on any statement that you receive. If FSEMX sells shares of stocks that it holds to pay the expenses, this reduces the share value (NAV) of the mutual fund shares that you hold. So, if your mutual fund account \"\"lost\"\" 20% in value that year because the market was falling, and you got no dividend or capital gains distributions either, remember that only 19.9% of that loss can be blamed on the President or Congress or Wall Street or public-sector unions or your neighbor's refusal to ditch his old PC in favor of a new Mac, and the rest (0.1%) has gone to FSEMX to pay for fees you agreed to when you bought FSEMX shares. If you invest directly in FSEMX through Fidelity's web site, there is no sales charge, and you pay no expenses other than the 0.1% annual expense ratio. There is a fee for selling FSEMX shares after owning them only for a short time since the fund wants to discourage short-term investors. Whatever other fees finance.yahoo.com lists might be descriptive of the uses that FSEMX puts its expense ratio income to in its internal management, but are not of any importance to the prudent investor in FSEMX who will never encounter them or have to pay them.\"",
"title": ""
}
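A small sketch of the expense-ratio mechanics described in the answer above, using the 0.1% ratio, the hypothetical 3% of cash generated, and the 20% down-year from that answer:

```python
annual_expense_ratio = 0.001                      # FSEMX's stated 0.1%
monthly_accrual = annual_expense_ratio / 12       # roughly what the fund pays itself each month

gross_income_pct = 0.03                           # say trading gains plus dividends total 3% of assets
distributed_pct = gross_income_pct - annual_expense_ratio    # ~2.9% reaches shareholders

reported_loss = -0.20                             # a 20% drop in reported account value...
market_share_of_loss = reported_loss + annual_expense_ratio  # ...is ~19.9% market, ~0.1% fees

print(f"monthly accrual:            {monthly_accrual:.4%}")
print(f"distributed to holders:     {distributed_pct:.2%}")
print(f"market portion of the loss: {market_share_of_loss:.1%}")
```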
] |
[
{
"docid": "1c2fb38a15c99bf28d50cb7d0d6e7c5a",
"text": "Merrill charges $500 flat fee to (I assume purchase) my untraded or worthless security. In my case, it's an OTC stock whose management used for a microcap scam, which resulted in a class action lawsuit, etc. but the company is still listed on OTC and I'm stuck with 1000s of shares. (No idea about the court decision)",
"title": ""
},
{
"docid": "0f575010cfb2d70008bd14a524d90fbf",
"text": "\"Its a broker fee, not something charged by the reorganizing company. E*Trade charge $20, TD Ameritrade charge $38. As with any other bank fee - shop around. If you know the company is going to do a split, and this fee is of a significant amount for you - move your account to a different broker. It may be that some portion of the fee is shared by the broker with the shares managing services provider of the reorgonizing company, don't know for sure. But you're charged by your broker. Note that the fees differ for voluntary and involuntary reorganizations, and also by your stand with the broker - some don't charge their \"\"premier\"\" customers.\"",
"title": ""
},
{
"docid": "eb84e724bb226333f80ea5fc01b6df45",
"text": "\"In many cases the expenses are not pulled out on a specific day, so this wouldn't work. On the other hand some funds do charge an annual or quarterly fee if your investment in the fund is larger than the minimum but lower than a \"\"small balance\"\" value. Many funds will reduce or eliminate this fee if you signup for electronic forms or other electronic services. Some will also eliminate the fee if the total investment in all your funds is above a certain level. For retirement funds what you suggest could be made more complex because of annual limits. Though if you were below the limits you could decide to add the extra funds to cover those expenses as the end of the year approached.\"",
"title": ""
},
{
"docid": "5e8494e54f4125111114c7361174730d",
"text": "\"Am I wrong? Yes. The exchanges are most definitely not \"\"good ole boys clubs\"\". They provide a service (a huge, liquid and very fast market), and they want to be paid for it. Additionally, since direct participants in their system can cause serious and expensive disruptions, they allow only organizations that know what they're doing and can pay for any damages the cause. Is there a way to invest without an intermediary? Certainly, but if you have to ask this question, it's the last thing you should do. Typically such offers are only superior to people who have large investments sums and know what they're doing - as an inexperienced investor, chances are that you'll end up losing everything to some fraudster. Honestly, large exchanges have become so cheap (e.g. XETRA costs 2.52 EUR + 0.0504% per trade) that if you're actually investing, then exchange fees are completely irrelevant. The only exception may be if you want to use a dollar-cost averaging strategy and don't have a lot of cash every month - fixed fees can be significant then. Many banks offer investments plans that cover this case.\"",
"title": ""
},
{
"docid": "0f25b9fbec9ffacf7aed54f24f4be5ec",
"text": "In the absence of a country designation where the mutual fund is registered, the question cannot be fully answered. For US mutual funds, the N.A.V per share is calculated each day after the close of the stock exchanges and all purchase and redemption requests received that day are transacted at this share price. So, the price of the mutual fund shares for April 2016 is not enough information: you need to specify the date more accurately. Your calculation of what you get from the mutual fund is incorrect because in the US, declared mutual fund dividends are net of the expense ratio. If the declared dividend is US$ 0.0451 per share, you get a cash payout of US$ 0.0451 for each share that you own: the expense ratio has already been subtracted before the declared dividend is calculated. The N.A.V. price of the mutual fund also falls by the amount of the per-share dividend (assuming that the price of all the fund assets (e.g. shares of stocks, bonds etc) does not change that day). Thus. if you have opted to re-invest your dividend in the same fund, your holding has the same value as before, but you own more shares of the mutual fund (which have a lower price per share). For exchange-traded funds, the rules are slightly different. In other jurisdictions, the rules might be different too.",
"title": ""
},
{
"docid": "a336e432920f71cf5cf7ca918fa8eb41",
"text": "I have a bank account in the US from some time spent there a while back. When I wanted to move most of the money to the UK (in about 2006), I used XEtrade who withdrew the money from my US account and sent me a UK cheque. They might also offer direct deposit to the UK account now. It was a bit of hassle getting the account set up and linked to my US account, but the transaction itself was straightforward. I don't think there was a specific fee, just spread on the FX rate, but I can't remember for certain now - I was transfering a few thousand dollars, so a relatively small fixed fee would probably not have bothered me too much.",
"title": ""
},
{
"docid": "d3741d5862564553029f431e8570eb66",
"text": "\"The mutual fund is legally its own company that you're investing in, with its own expenses. Mutual fund expense ratios are a calculated value, not a promise that you'll pay a certain percentage on a particular day. That is to say, at the end of their fiscal year, a fund will total up how much it spent on administration and divide it by the total assets under management to calculate what the expense ratio is for that year, and publish it in the annual report. But you can't just \"\"pay the fee\"\" for any given year. In a \"\"regular\"\" account, you certainly could look at what expenses were paid for each fund by multiplying the expense ratio by your investment, and use it in some way to figure out how much additional you want to contribute to \"\"make it whole\"\" again. But it makes about as much sense as trying to pay the commission for buying a single stock out of one checking account while paying for the share price out of another. It may help you in some sort of mental accounting of expenses, but since it's all your money, and the expenses are all part of what you're paying to be able to invest, it's not really doing much good since money is fungible. In a retirement account with contribution limits, it still doesn't really make sense, since any contribution from outside funds to try to pay for expense ratios would be counted as contributions like any other. Again, I guess it could somehow help you account for how much money you wanted to contribute in a year, but I'm not really sure it would help you much. Some funds or brokerages do have non-expense-ratio-based fees, and in some cases you can pay for those from outside the account. And there are a couple cases where for a retirement account this lets you keep your contributions invested while paying for fees from outside funds. This may be the kind of thing that your coworker was referring to, though it's hard to tell exactly from your description. Usually it's best just to have investments with as low fees as possible regardless, since they're one of the biggest drags on returns, and I'd be very wary of any brokerage-based fees when there are very cheap and free mutual fund brokerages out there.\"",
"title": ""
},
{
"docid": "ebd2083d3c4dfd4d089cf638a06602e2",
"text": "One thing I would add to @littleadv (buy an ETF instead of doing your own) answer would be ensure that the dividend yield matches. Expense ratios aren't the only thing that eat you with mutual funds: the managers can hold on to a large percentage of the dividends that the stocks normally pay (for instance, if by holding onto the same stocks, you would normally receive 3% a year in dividends, but by having a mutual fund, you only receive .75%, that's an additional cost to you). If you tried to match the DJIA on your own, you would have an advantage of receiving the dividend yields on the stocks paying dividends. The downsides: distributing your investments to match and the costs of actual purchases.",
"title": ""
},
{
"docid": "3d12c0c2e49ae772068d2367c496cb88",
"text": "0.13% is a pretty low fee. PTTRX expenses are 0.45%, VINIX expenses are 0.04%. So based on your allocation, you end up with at least 0.08%. While lower than 0.13%, don't know if it is worth the trouble (and potentially fees) of monthly re-balancing.",
"title": ""
},
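The passage above computes a blended expense ratio from two funds' ratios. A small illustrative Python sketch of that weighted-average calculation; the 10%/90% allocation is an assumption chosen to land near the quoted 0.08%, since the actual allocation is not given in the passage.

```python
# Hypothetical sketch: blended expense ratio of a two-fund allocation.
# Only the two expense ratios come from the passage; the weights are assumed.

holdings = {
    "PTTRX": {"weight": 0.10, "expense_ratio": 0.0045},  # 0.45%
    "VINIX": {"weight": 0.90, "expense_ratio": 0.0004},  # 0.04%
}

# weighted average: 0.10 * 0.45% + 0.90 * 0.04% = 0.081%
blended = sum(h["weight"] * h["expense_ratio"] for h in holdings.values())
print(f"blended expense ratio: {blended:.4%}")
```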
{
"docid": "7a55c44dfb0435d43f0e98deac371602",
"text": "ETrade allows this without fees (when investing into one of the No-Load/No-Fees funds from their list). The Sharebuilder plan is better when investing into ETF's or stocks, not for mutual funds, their choice (of no-fees funds) is rather limited on Sharebuilder.",
"title": ""
},
{
"docid": "5ccb32cd8143fa9afeef7fda8339111b",
"text": "In the US, expense ratios are stated in the Prospectus of the fund, which you must acknowledge as having read before the fund will accept your money to invest. You never acknowledged any such thing? Actually you did when you checked the box saying that you accept the Terms of the Agreement when you made the investment. The expense ratio can be changed by vote of the Board of Directors of the fund but the change must be included in the revised Prospectus of the fund, and current investors must be informed of the change. This can be a direct mailing (or e-mailing) from the mutual fund or an invitation to read the new Prospectus on the fund's website for those who have elected to go paperless. So, yes, the expense ratio can be changed (though not by the manager of the fund, e.g. just because he/she wants a bigger salary or a fancier company car, as you think), and not without notice to investors.",
"title": ""
},
{
"docid": "704b6900ee772c3bc8f88707d1921036",
"text": "I'm not a professional, but my understanding is that US funds are not considered PFICs regardless of the fact that they are held in a foreign brokerage account. In addition, be aware that foreign stocks are not considered PFICs (although foreign ETFs may be).",
"title": ""
},
{
"docid": "667f5ee83a6fccf6901ac2c01fee122a",
"text": "I see a couple of reasons why you could consider choosing a mutual fund over an ETF In some cases index mutual funds can be a cheaper alternative to ETFs. In the UK where I am based, Fidelity is offering a management fee of 0.07% on its FTSE All shares tracker. Last time I checked, no ETF was beating that There are quite a few cost you have to foot when dealing ETFs In some cases, when dealing for relatively small amounts (e.g. a monthly investment plan) you can get a better deal, if your broker has negotiated discounts for you with a fund provider. My broker asks £12.5 when dealing in shares (£1.5 for the regular investment plan) whereas he asks £0 when dealing in funds and I get a 100% discount on the initial charge of the fund. As a conclusion, I would suggest you look at the all-in costs over total investment period you are considering for the exact amount you are planning to invest. Despite all the hype, ETFs are not always the cheapest alternative.",
"title": ""
},
{
"docid": "d1eee4f33571648fb95733b26e6f5736",
"text": "\"Here's an example that I'm trying to figure out. ETF firm has an agreement with GS for blocks of IBM. They have agreed on daily VWAP + 1% for execution price. Further, there is a commission schedule for 5 mils with GS. Come month end, ETF firm has to do a monthly rebalance. As such must buy 100,000 shares at IBM which goes for about $100 The commission for the trade is 100,000 * 5 mils = $500 in commission for that trade. I assume all of this is covered in the expense ratio. Such that if VWAP for the day was 100, then each share got executed to the ETF at 101 (VWAP+ %1) + .0005 (5 mils per share) = for a resultant 101.0005 cost basis The ETF then turns around and takes out (let's say) 1% as the expense ratio ($1.01005 per share) I think everything so far is pretty straight forward. Let me know if I missed something to this point. Now, this is what I'm trying to get my head around. ETF firm has a revenue sharing agreement as well as other \"\"relations\"\" with GS. One of which is 50% back on commissions as soft dollars. On top of that GS has a program where if you do a set amount of \"\"VWAP +\"\" trades you are eligible for their corporate well-being programs and other \"\"sponsorship\"\" of ETF's interests including helping to pay for marketing, rent, computers, etc. Does that happen? Do these disclosures exist somewhere?\"",
"title": ""
},
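The question above spells out the arithmetic for a VWAP + 1% execution with a 5-mil commission and a 1% expense ratio. A short Python sketch of that calculation, using only the hypothetical numbers from the question itself.

```python
# Sketch of the execution-cost arithmetic described in the question above.
# All values are the question's hypothetical example, not real trade data.

shares = 100_000
vwap = 100.00                  # day's volume-weighted average price
markup = 0.01                  # "VWAP + 1%" execution agreement
commission_per_share = 0.005   # 5 mils per share

exec_price = vwap * (1 + markup)                           # 101.00
commission = shares * commission_per_share                 # $500 total
cost_basis_per_share = exec_price + commission_per_share   # 101.005

expense_ratio = 0.01                                       # 1% taken by the ETF
expense_per_share = cost_basis_per_share * expense_ratio   # 1.01005

print(exec_price, commission, cost_basis_per_share, expense_per_share)
```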
{
"docid": "f5712c11a97266c6e2a9309ec306d034",
"text": "You do realize that the fund will have management expenses that are likely already factored into the NAV and that when you sell, the NAV will not yet be known, right? There are often fees to run a mutual fund that may be taken as part of managing the fund that are already factored into the Net Asset Value(NAV) of the shares that would be my caution as well as possible fee changes as Dilip Sarwate notes in a comment. Expense ratios are standard for mutual funds, yes. Individual stocks that represent corporations not structured as a mutual fund don't declare a ratio of how much are their costs, e.g. Apple or Google may well invest in numerous other companies but the costs of making those investments won't be well detailed though these companies do have non-investment operations of course. Don't forget to read the fund's prospectus as sometimes a fund will have other fees like account maintenance fees that may be taken out of distributions as well as being aware of how taxes will be handled as you don't specify what kind of account these purchases are being done using.",
"title": ""
}
] |
fiqa
|
41636fdb6bd2eea6ff7b7b10f5bd11e8
|
What cost basis accounting methods are applicable to virtual currencies?
|
[
{
"docid": "7272c31978e10ac0038691e7e9e1f605",
"text": "\"The only \"\"authoritative document\"\" issued by the IRS to date relating to Cryptocurrencies is Notice 2014-21. It has this to say as the first Q&A: Q-1: How is virtual currency treated for federal tax purposes? A-1: For federal tax purposes, virtual currency is treated as property. General tax principles applicable to property transactions apply to transactions using virtual currency. That is to say, it should be treated as property like any other asset. Basis reporting the same as any other property would apply, as described in IRS documentation like Publication 550, Investment Income and Expenses and Publication 551, Basis of Assets. You should be able to use the same basis tracking method as you would use for any other capital asset like stocks or bonds. Per Publication 550 \"\"How To Figure Gain or Loss\"\", You figure gain or loss on a sale or trade of property by comparing the amount you realize with the adjusted basis of the property. Gain. If the amount you realize from a sale or trade is more than the adjusted basis of the property you transfer, the difference is a gain. Loss. If the adjusted basis of the property you transfer is more than the amount you realize, the difference is a loss. That is, the assumption with property is that you would be using specific identification. There are specific rules for mutual funds to allow for using average cost or defaulting to FIFO, but for general \"\"property\"\", including individual stocks and bonds, there is just Specific Identification or FIFO (and FIFO is just making an assumption about what you're choosing to sell first in the absence of any further information). You don't need to track exactly \"\"which Bitcoin\"\" was sold in terms of exactly how the transactions are on the Bitcoin ledger, it's just that you bought x bitcoins on date d, and when you sell a lot of up to x bitcoins you specify in your own records that the sale was of those specific bitcoins that you bought on date d and report it on your tax forms accordingly and keep track of how much of that lot is remaining. It works just like with stocks, where once you buy a share of XYZ Corp on one date and two shares on another date, you don't need to track the movement of stock certificates and ensure that you sell that exact certificate, you just identify which purchase lot is being sold at the time of sale.\"",
"title": ""
}
] |
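The positive passage above explains that cost basis for virtual currency works like other property: FIFO in the absence of other information, or specific identification of lots. A rough illustrative Python sketch of FIFO lot consumption; the lots, dates, and prices are made-up examples, not tax guidance.

```python
# Rough sketch of lot tracking for property such as bitcoin: FIFO simply
# consumes the oldest lots first; specific identification would let you
# pick which lots to treat as sold instead.

from collections import deque

lots = deque([
    {"date": "2020-01-10", "qty": 1.0, "cost_per_unit": 8_000.0},   # hypothetical
    {"date": "2021-03-05", "qty": 2.0, "cost_per_unit": 50_000.0},  # hypothetical
])

def sell_fifo(lots, qty, sale_price_per_unit):
    """Consume the oldest lots first and return the realized gain or loss."""
    gain = 0.0
    while qty > 1e-12:
        lot = lots[0]
        take = min(qty, lot["qty"])
        gain += take * (sale_price_per_unit - lot["cost_per_unit"])
        lot["qty"] -= take
        qty -= take
        if lot["qty"] <= 1e-12:
            lots.popleft()
    return gain

# sell 1.5 units at 40,000: 1.0*(40k-8k) + 0.5*(40k-50k) = 27,000 gain
print(sell_fifo(lots, 1.5, 40_000.0))
```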
[
{
"docid": "ca5d202b93c164af5f61d58a5cd0aa01",
"text": "Here's what the GnuCash documentation, 10.5 Tracking Currency Investments (How-To) has to say about bookkeeping for currency exchanges. Essentially, treat all currency conversions in a similar way to investment transactions. In addition to asset accounts to represent holdings in Currency A and Currency B, have an foreign exchange expenses account and a capital gains/losses account (for each currency, I would imagine). Represent each foreign exchange purchase as a three-way split: source currency debit, foreign exchange fee debit, and destination currency credit. Represent each foreign exchange sale as a five-way split: in addition to the receiving currency asset and the exchange fee expense, list the transaction profit in a capital gains account and have two splits against the asset account of the transaction being sold. My problems with this are: I don't know how the profit on a currency sale is calculated (since the amount need not be related to any counterpart currency purchase), and it seems asymmetrical. I'd welcome an answer that clarifies what the GnuCash documentation is trying to say in section 10.5.",
"title": ""
},
{
"docid": "af3575f1faff6c617daffd493faa8815",
"text": "Lets look at possible use cases: If you ever converted your cryptocurrency to cash on a foreign exchange, then **YES** you had to report. That means if you ever daytraded and the US dollar (or other fiat) amount was $10,000 or greater when you went out of crypto, then you need to report. Because the regulations stipulate you need to report over $10,000 at any point in the year. If you DID NOT convert your cryptocurrency to cash, and only had them on an exchange's servers, perhaps traded for other cryptocurrency pairs, then NO this did not fall under the regulations. Example, In 2013 I wanted to cash out of a cryptocurrency that didn't have a USD market in the United States, but I didn't want to go to cash on a foreign exchange specifically for this reason (amongst others). So I sold my Litecoin on BTC-E (Slovakia) for Bitcoin, and then I sold the Bitcoin on Coinbase (USA). (even though BTC-E had a Litecoin/USD market, and then I could day trade the swings easily to make more capital gains, but I wanted cash in my bank account AND didn't want the reporting overhead). Read the regulations yourself. Financial instruments that are reportable: Cash (fiat), securities, futures and options. Also, http://www.bna.com/irs-no-bitcoin-n17179891056/ whether it is just in the blockchain or on a server, IRS and FINCEN said bitcoin is not reportable on FBAR. When they update their guidance, it'll be in the news. The director of FinCEN is very active in cryptocurrency developments and guidance. Bitcoin has been around for six years, it isn't that esoteric and the government isn't that confused on what it is (IRS and FinCEN's hands are tied by Congress in how to more realistically categorize cryptocurrency) Although at this point in time, there are several very liquid exchanges within the United States, such as the one NYSE/ICE hosts (Coinbase).",
"title": ""
},
{
"docid": "890e8e0a93a34ffc61874715ecaac7a2",
"text": "\"You say you want a more \"\"stable\"\" system. Recall from your introductory economics courses that money has three roles: a medium of exchange (here is $, give me goods), a unit of account (you owe me $; the business made $ last year), and a store of value (I have saved $ for the future!). I assume that you are mostly concerned with the store-of-value role being eroded due to inflation. But first consider that most people still want regular currency, so as a medium of exchange or accounting unit anything would face an uphill battle. If you discard that role for your currency, and only want to store value with it, you could just buy equities and commodities and baskets of currencies and debt in a brokerage account (possibly using mutual funds) to store your value. Trillions of dollars' worth of business takes place this way every year already. Virtual currency was a bit of a dot-com bubble thing. The systems which didn't go completely bust and are still around have been beset by money-laundering, and otherwise remain largely an ignored niche. An online fiat currency has the same basic problem that another currency has. You need to trust the central bank not to create more money and cause inflation (or even just abscond with the funds... or go bankrupt / get sued). Perhaps the Federal Reserve may be jerking us around on that front right now.... they're still a lot more believable than a small private institution. Some banks might possibly be trustworthy enough to launch a currency, but it's hard to see why they'd bother (it can't be a big profit center, because people aren't willing to pay too much to just use money.) And an online currency that's backed by commodities (e.g. gold) is going to be subject to potentially violent swings in the prices of commodities. Imagine getting a loan out for your house, denominated in terms of e-gold, and then the price of gold triples. Ouch?\"",
"title": ""
},
{
"docid": "0ff87b4504eaa0cf33d2b696582f47ef",
"text": "\"I think the \"\"right\"\" way to approach this is for your personal books and your business's books to be completely separate. You would need to really think of them as separate things, such that rather than being disappointed that there's no \"\"cross transactions\"\" between files, you think of it as \"\"In my personal account I invested in a new business like any other investment\"\" with a transfer from your personal account to a Stock or other investment account in your company, and \"\"This business received some additional capital\"\" which one handles with a transfer (probably from Equity) to its checking account or the like. Yes, you don't get the built-in checks that you entered the same dollar amount in each, but (1) you need to reconcile your books against reality anyway occasionally, so errors should get caught, and (2) the transactions really are separate things from each entity's perspective. The main way to \"\"hack it\"\" would be to have separate top-level placeholder accounts for the business's Equity, Income, Expenses, and Assets/Liabilities. That is, your top-level accounts would be \"\"Personal Equity\"\", \"\"Business Equity\"\", \"\"Personal Income\"\", \"\"Business Income\"\", and so on. You can combine Assets and Liabilities within a single top-level account if you want, which may help you with that \"\"outlook of my business value\"\" you're looking for. (In fact, in my personal books, I have in the \"\"Current Assets\"\" account both normal things like my Checking account, but also my credit cards, because once I spend the money on my credit card I want to think of the money as being gone, since it is. Obviously this isn't \"\"standard accounting\"\" in any way, but it works well for what I use it for.) You could also just have within each \"\"normal\"\" top-level placeholder account, a placeholder account for both \"\"Personal\"\" and \"\"My Business\"\", to at least have a consistent structure. Depending on how your business is getting taxed in your jurisdiction, this may even be closer to how your taxing authorities treat things (if, for instance, the business income all goes on your personal tax return, but on a separate form). Regardless of how you set up the accounts, you can then create reports and filter them to include just that set of business accounts. I can see how just looking at the account list and transaction registers can be useful for many things, but the reporting does let you look at everything you need and handles much better when you want to look through a filter to just part of your financial picture. Once you set up the reporting (and you can report on lists of account balances, as well as transaction lists, and lots of other things), you can save them as Custom Reports, and then open them up whenever you want. You can even just leave a report tab (or several) open, and switch to it (refreshing it if needed) just like you might switch to the main Account List tab. I suspect once you got it set up and tried it for a while you'd find it quite satisfactory.\"",
"title": ""
},
{
"docid": "b3ff2d91d58df55f959c18195cd1b5d0",
"text": "As BrenBarn stated, tracking fractional transactions beyond 8 decimal places makes no sense in the context of standard stock and mutual fund transactions. This is because even for the most expensive equities, those fractional shares would still not be worth whole cent amounts, even for account balances in the hundreds of thousands of dollars. One important thing to remember is that when dealing with equities the total cost, number of shares, and share price are all 3 components of the same value. Thus if you take 2 of those values, you can always calculate the third: (price * shares = cost, cost / price = shares, etc). What you're seeing in your account (9 decimal places) is probably the result of dividing uneven values (such as $9.37 invested in a commodity which trades for $235.11, results in 0.03985368550891072264046616477394 shares). Most brokerages will round this value off somewhere, yours just happens to include more decimal places than your financial software allows. Since your brokerage is the one who has the definitive total for your account balance, the only real solution is to round up or down, whichever keeps your total balance in the software in line with the balance shown online.",
"title": ""
},
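The passage above notes that cost, share count, and price are three views of the same value, and that long decimal tails come from dividing uneven amounts. A tiny Python sketch using the passage's own dollar figures.

```python
# Sketch of the price/shares/cost identity and why long decimal tails appear.
# The dollar amounts are the passage's example.

cash_invested = 9.37
share_price = 235.11

shares = cash_invested / share_price   # 0.0398536855... (full float precision)
print(shares)
print(round(shares, 8))                # what a brokerage might report

# any two of the three values recover the third
print(round(shares * share_price, 2))  # back to 9.37
```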
{
"docid": "6057489b63d4a6078034e2f58b3fe5f7",
"text": "I'm not sure, but I think the monetary system of Second Life or World of Warcraft would correspond to what you are looking for. I don't think they are independent of the dollar though, since acquiring liquidity in those games can be done through exchange for real dollars. But there can be more closed systems, maybe Sim type games where this is not the case. I hope this helps.",
"title": ""
},
{
"docid": "db751b9cc469f547550a323044b23d8e",
"text": "For manual conversion you can use many sites, starting from google (type 30 USD in yuan) to sites like xe.com mentioned here. For programmatic conversion, you could use Google Calculator API or many other currency exchange APIs that are available. Beware however that if you do it on the real site, the exchange rate is different from actual rates used by banks and payment processing companies - while they use market-based rates, they usually charge some premium on currency conversion, meaning that if you have something for 30 dollars, according to current rate it may bet 198 yuan, but if he uses a credit card for purchase, it may cost him, for example, 204 yuan. You should be very careful about making difference between snapshot market rates and actual rates used in specific transaction.",
"title": ""
},
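The passage above distinguishes mid-market rates from the rates card issuers actually apply. An illustrative Python sketch of that markup; the 6.60 CNY/USD rate and the 3% premium are assumptions, chosen so the output roughly matches the passage's 198 vs. 204 yuan figures.

```python
# Sketch of why a card purchase can cost more than the quoted market rate.
# Both the mid-market rate and the premium below are assumed values.

price_usd = 30.0
market_rate_cny_per_usd = 6.60     # assumed mid-market rate
card_premium = 0.03                # assumed FX markup charged by the card issuer

mid_market_cost = price_usd * market_rate_cny_per_usd
charged_cost = mid_market_cost * (1 + card_premium)

print(f"mid-market: {mid_market_cost:.2f} CNY, charged: {charged_cost:.2f} CNY")
```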
{
"docid": "ff6db88144e4c3dbec7e59ade40ecefc",
"text": "Using a different cost basis than your broker's reporting is NOT a problem. You need to keep your own records to account for this difference. Among the other many legitimate reasons to adjust your cost basis, the most popular is when you have two brokerage accounts and sell an asset in one then buy in another. This is called a Wash Sale and is not a taxable event for you. However from the perspective of each broker with their limited information you are making a transaction with tax implications and their reported 1099 will show as such. Links: https://www.firstinvestors.com/docs/pdf/news/tax-qa-2012.pdf",
"title": ""
},
{
"docid": "3200217e7939b7c9eb0a82e4a1124feb",
"text": "Here is the technical guidance from the accounting standard FRS 23 (IAS 21) 'The Effects of Changes in Foreign Exchange Rates' which states: Exchange differences arising on the settlement of monetary items or on translating monetary items at rates different from those at which they were translated on initial recognition during the period or in previous financial statements shall be recognised in profit or loss in the period in which they arise. An example: You agree to sell a product for $100 to a customer at a certain date. You would record the sale of this product on that date at $100, converted at the current FX rate (lets say £1:$1 for ease) in your profit loss account as £100. The customer then pays you several $100 days later, at which point the FX rate has fallen to £0.5:$1 and you only receive £50. You would then have a realised loss of £50 due to exchange differences, and this is charged to your profit and loss account as a cost. Due to double entry bookkeeping the profit/loss on the FX difference is needed to balance the journals of the transaction. I think there is a little confusion as to what constitutes a (realised) profit/loss on exchange difference. In the example in your question, you are not making any loss when you convert the bitcoins to dollars, as there is no difference in the exchange rate between the point you convert them. Therefore you have not made either a profit or a loss. In terms of how this effects your tax position; you only pay tax on your profit and loss account. The example I give above is an instance where an exchange difference is recorded to the P&L. In your example, the value of your cash held is reflected in your balance sheet, as an asset, whatever its value is at the balance sheet date. Unfortunately, the value of the asset can rise/fall, but the only time where you will record a profit/loss on this (and therefore have an impact on tax) is if you sell the asset.",
"title": ""
},
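The passage above gives a worked example of a realized exchange difference under FRS 23 / IAS 21. A minimal Python sketch of the same arithmetic, using the passage's simplified rates.

```python
# Sketch of the realized exchange difference described in the passage above.
# Rates are the passage's simplified example (GBP per USD).

invoice_usd = 100.0
rate_at_sale = 1.0       # GBP per USD when the sale is recognized
rate_at_payment = 0.5    # GBP per USD when the cash is actually received

revenue_gbp = invoice_usd * rate_at_sale          # booked at 100 GBP
cash_received_gbp = invoice_usd * rate_at_payment  # only 50 GBP arrives

fx_difference = cash_received_gbp - revenue_gbp    # -50 GBP realized loss
print(fx_difference)
```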
{
"docid": "9c7310340478610eea3f1d4b154baaf6",
"text": "\"As far as I can tell there are no \"\"out-of-the-box\"\" solutions for this. Nor will Moneydance or GnuCash give you the full solution you are looking for. I imaging people don't write a well-known, open-source, tool that will do this for fear of the negative uses it could have, and the resulting liability. You can roll-you-own using the following obscure tools that approximate a solution: First download the bank's CSV information: http://baruch.ev-en.org/proj/gnucash.html That guy did it with a perl script that you can modify. Then convert the result to OFX for use elsewhere: http://allmybrain.com/2009/02/04/converting-financial-csv-data-to-ofx-or-qif-import-files/\"",
"title": ""
},
{
"docid": "9758a5c6885e6ddfe6022e9eb530ab12",
"text": "\"According to the following article the answer is \"\"first-in, first-out\"\": http://smallbusiness.chron.com/calculate-cost-basis-stock-multiple-purchases-21588.html According to the following article the last answer was just one option an investor can choose: https://www.usaa.com/inet/pages/advice-investing-costbasis?akredirect=true\"",
"title": ""
},
{
"docid": "8568a818f3a0c4a7473017be99a53d48",
"text": "\"I found an answer by Peter Selinger, in two articles, Tutorial on multiple currency accounting (June 2005, Jan 2011) and the accompanying Multiple currency accounting in GnuCash (June 2005, Feb 2007). Selinger embraces the currency neutrality I'm after. His method uses \"\"[a]n account that is denominated as a difference of multiple currencies... known as a currency trading account.\"\" Currency trading accounts show the gain or loss based on exchange rates at any moment. Apparently GnuCash 2.3.9 added support for multi-currency accounting. I haven't tried this myself. This feature is not enabled by default, and must be turned on explicity. To do so, check \"\"Use Trading Accounts\"\" under File -> Properties -> Accounts. This must be done on a per-file basis. Thanks to Mike Alexander, who implemented this feature in 2007, and worked for over 3 years to convince the GnuCash developers to include it. Older versions of GnuCash, such as 1.8.11, apparently had a feature called \"\"Currency Trading Accounts\"\", but they behaved differently than Selinger's method.\"",
"title": ""
},
{
"docid": "f469aad776f005ed531a025b282f05ad",
"text": "This is great! I'm not a CPA, but work in finance. As such, my course/professional work is focused more on the economic and profitability aspects of transfer pricing. As you might imagine, it tended to analyze corporate strategy decisions under various cost allocation models, which you thoroughly discuss. I would agree with the statement that it is based on the matching principle but would like to add that transfer pricing is interesting as it falls under several fields: accounting, finance, and economics. Fundamentally it is based on the matching principal, but it's real world applications are based on all three (it's often used to determine divisional and even individual sales peoples profitability; as is the case with bank related funds transfer pricing on stuff like time deposits). In this case, the correct accounting principal allows you to, when done properly, better understand the economics, strategy, and operations of an organization. In effect, when done correctly, it provides transparency for strategic decision making to executives. As I said, since my coursework tended to focus more on that aspect, I definitely have a natural tendency towards it. This is an amazing explanation (esp. about interest on M&A bridge loans, I get that) of the more detailed stuff! Truthfully, I'm not as familiar with it and was just trying to show more of the conceptual than nitty-gritty. Thanks for the reply!",
"title": ""
},
{
"docid": "3e6282fb122d5582ccfa8b6a505152c3",
"text": "\"Stocks, as an asset, represent the sum of the current market value of all of your holdings. If your portfolio is showing unrealized gains and losses, then that net amount is inherently reflected in the current market value of your holdings. That's not to say cost basis is not important. Any closed trades, realized gains or losses, will of course have an impact on your taxable income. So, it couldn't hurt to keep track of your cost basis from a tax standpoint, but understand that the term \"\"asset\"\" refers to the current market values and does not consider base amounts. Taxes do. Perhaps consider making separate cells for cost basis, but also bear in mind that most if not all of the major online discount brokers will provide transferring of cost basis information electronically to the major online tax service providers.\"",
"title": ""
},
{
"docid": "e4f3cdc40f6813b9e28e7ef4024c433d",
"text": "You just explained why I'm boggled that some companies an individuals are willing to utilize such an erratic currency for exchanges. I tried using it as a payment method in my previous services company, but was understanding most people wouldn't and that it could potentially be loss for myself. Which on a smaller scale isn't really a issue, but scale up to some of the larger companies and it could be devastating. It's an entirely speculative market that could depreciate returns quickly.",
"title": ""
}
] |
fiqa
|
f6550cba69bf8757a207033593f65fa0
|
Exchange rate $ ETF,s
|
[
{
"docid": "505ca7e221596c6b8fd0ab08c320d875",
"text": "Your assumption that funds sold in GBP trade in GBP is incorrect. In general funds purchase their constituent stocks in the fund currency which may be different to the subscription currency. Where the subscription currency is different from the fund currency subscriptions are converted into the fund currency before the extra money is used to increase holdings. An ETF, on the other hand, does not take subscriptions directly but by creation (and redemption) of shares. The principle is the same however; monies received from creation of ETF shares are converted into the fund currency and then used to buy stock. This ensures that only one currency transaction is done. In your specific example the fund currency will be USD so your purchase of the shares (assuming there are no sellers and creation occurs) will be converted from GBP to USD and held in that currency in the fund. The fund then trades entirely in USD to avoid currency risk. When you want to sell your exposure (supposing redemption occurs) enough holdings required to redeem your money are sold to get cash in USD and then converted to GBP before paying you. This means that trading activity where there is no need to convert to GBP (or any other currency) does not incur currency conversion costs. In practice funds will always have some cash (or cash equivalents) on hand to pay out redemptions and will have an idea of the number and size of redemptions each calendar period so will use futures and swaps to mitigate FX risk. Where the same firm has two funds traded in different currencies with the same objectives it is likely that one is a wrapper for the other such that one simply converts the currency and buys the other currency denominated ETF. As these are exchange traded funds with a price in GBP the amount you pay for the ETF or gain on selling it is the price given and you will not have to consider currency exchange as that should be done internally as explained above. However, there can be a (temporary) arbitrage opportunity if the price in GBP does not reflect the price in USD and the exchange rate put together.",
"title": ""
}
] |
[
{
"docid": "d37d9a994626f347749725d7d6066a17",
"text": "With the disclaimer that I am not a technician, I'd answer yes, it does. SPY (for clarification, an ETF that reflects the S&P 500 index) has dividends, and earnings, therefore a P/E and dividend yield. It would follow that the tools technicians use, such as moving averages, support and resistance levels also apply. Keep in mind, each and every year, one can take the S&P stocks and break them up, into quintiles or deciles based on return and show that not all stock move in unison. You can break up by industry as well which is what the SPDRs aim to do, and observe the movement of those sub-groups. But, no, not all the stocks will perform the way the index is predicted to. (Note - If a technician wishes to correct any key points here, you are welcome to add a note, hopefully, my answer was not biased)",
"title": ""
},
{
"docid": "7a2e015368c0e58fe28b560c29c9ef5f",
"text": "\"Ask your trading site for their definition of \"\"ETF\"\". The term itself is overloaded/ambiguous. Consider: If \"\"ETF\"\" is interpreted liberally, then any fund that trades on a [stock] exchange is an exchange-traded fund. i.e. the most literal meaning implied by the acronym itself. Whereas, if \"\"ETF\"\" is interpreted more narrowly and in the sense that most market participants might use it, then \"\"ETF\"\" refers to those exchange-traded funds that specifically have a mechanism in place to ensure the fund's current price remains close to its net asset value. This is not the case with closed-end funds (CEFs), which often trade at either a premium or a discount to their underlying net asset value.\"",
"title": ""
},
{
"docid": "66b6d7651ba92fdc726761af5e89c6f9",
"text": "\"I made an investing mistake many (eight?) years ago. Specifically, I invested a very large sum of money in a certain triple leveraged ETF (the asset has not yet been sold, but the value has decreased to maybe one 8th or 5th of the original amount). I thought the risk involved was the volatility--I didn't realize that due to the nature of the asset the value would be constantly decreasing towards zero! Anyhow, my question is what to do next? I would advise you to sell it ASAP. You didn't mention what ETF it is, but chances are you will continue to lose money. The complicating factor is that I have since moved out of the United States and am living abroad (i.e. Japan). I am permanent resident of my host country, I have a steady salary that is paid by a company incorporated in my host country, and pay taxes to the host government. I file a tax return to the U.S. Government each year, but all my income is excluded so I do not pay any taxes. In this way, I do not think that I can write anything off on my U.S. tax return. Also, I have absolutely no idea if I would be able to write off any losses on my Japanese tax return (I've entrusted all the family tax issues to my wife). Would this be possible? I can't answer this question but you seem to be looking for information on \"\"cross-border tax harvesting\"\". If Google doesn't yield useful results, I'd suggest you talk to an accountant who is familiar with the relevant tax codes. Are there any other available options (that would not involve having to tell my wife about the loss, which would be inevitable if I were to go the tax write-off route in Japan)? This is off topic but you should probably have an honest conversation with your wife regardless. If I continue to hold onto this asset the value will decrease lower and lower. Any suggestions as to what to do? See above: close your position ASAP For more information on the pitfalls of leveraged ETFs (FINRA) What happens if I hold longer than one trading day? While there may be trading and hedging strategies that justify holding these investments longer than a day, buy-and-hold investors with an intermediate or long-term time horizon should carefully consider whether these ETFs are appropriate for their portfolio. As discussed above, because leveraged and inverse ETFs reset each day, their performance can quickly diverge from the performance of the underlying index or benchmark. In other words, it is possible that you could suffer significant losses even if the long-term performance of the index showed a gain.\"",
"title": ""
},
{
"docid": "73f0f5884654654b0658b3caef2f0620",
"text": "You will most likely not be able to avoid some form of format conversion, regardless of which data you use since there is, afaik, no standard for this data and everyone exports it differently. One viable option would be, like you said yourself, using the free data provided by Dukascopy. Please take into consideration that those are spot currency rates and will most likely not represent the rate at which physical and business-related exchange would have happened at this time.",
"title": ""
},
{
"docid": "5bfedbdd63f74534043d2d59fcef16b4",
"text": "Like others have said, mutual funds don't have an intraday NAV, but their ETF equivalents do. Use something like Yahoo Finance and search for the ETF.IV. For example VOO.IV. This will give you not the ETF price (which may be at a premium or discount), but the value of the underlying securities updated every 15 seconds.",
"title": ""
},
{
"docid": "5c00f8c665e4ec0b23f34c604d02a242",
"text": "\"Without going into minor details, an FX transaction works essentially like this. Let's assume you have SEK 100 on your account. If you buy 100 USD/RUB at 1.00, then that transaction creates a positive cash balance on your account of USD 100 and a negative cash balance (an overdraft) of RUB 100. So right after the transaction (assuming there is not transaction cost), the \"\"net equity\"\" of your account is: 100 SEK + 100 USD - 100 RUB = 100 + 100 - 100 = 100 SEK. Let's say that, the day after, the RUB has gone down by 10% and the RUB 100 is now worth SEK 90 only. Your new equity is: 100 SEK + 100 USD - 100 RUB = 100 + 100 - 90 = 110 SEK and you've made 10%(*): congrats! Had you instead bought 100 SEK/RUB, the result would have been the same (assuming the USD/SEK rate constant). In practice the USD/SEK rate would probably not be constant and you would need to also account for: (*) in your example, the USD/RUB has gone up 10% but the RUB has gone down 9.09%, hence the result you find. In my example, the RUB has gone down 10% (i.e. the USD has gone up 11%).\"",
"title": ""
},
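The passage above marks a multi-currency account to market in SEK after buying USD/RUB. A small Python sketch of that equity calculation, using the passage's simplified rates.

```python
# Sketch of the account-equity arithmetic in the passage above: buying
# 100 USD/RUB creates a +100 USD and -100 RUB cash balance; equity is
# then re-marked in SEK as the rates move.

def equity_sek(sek, usd, rub, usd_sek, rub_sek):
    """Net equity of the account expressed in SEK."""
    return sek + usd * usd_sek + rub * rub_sek

# day 0: 1 USD = 1 SEK and 1 RUB = 1 SEK (the passage's simplification)
print(equity_sek(sek=100, usd=100, rub=-100, usd_sek=1.0, rub_sek=1.0))  # 100
# day 1: RUB falls 10% against SEK
print(equity_sek(sek=100, usd=100, rub=-100, usd_sek=1.0, rub_sek=0.9))  # 110
```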
{
"docid": "3623cb3175230cdde8f3cf5abed78175",
"text": "\"Following comments to your question here, you posted a separate question about why SPY, SPX, and the options contract don't move perfectly together. That's here Why don't SPY, SPX, and the e-mini s&p 500 track perfectly with each other? I provided an answer to that question and will build on it to answer what I think you're asking on this question. Specifically, I explained what it means that these are \"\"all based on the S&P.\"\" Each is a different entity, and different market forces keep them aligned. I think talking about \"\"technicals\"\" on options contracts is going to be too confusing since they are really a very different beast based on forward pricing models, so, for this question, I'll focus on only SPY and SPX. As in my other answer, it's only through specific market forces (the creation / redemption mechanism that I described in my other answer), that they track at all. There's nothing automatic about this and it has nothing to do with some issuer of SPY actually holding stock in the companies that comprise the SPX index. (That's not to say that the company does or doesn't hold, just that this doesn't drive the prices.) What ever technical signals you're tracking, will reflect all of the market forces at play. For SPX (the index), that means some aggregate behavior of the component companies, computed in a \"\"mathematically pure\"\" way. For SPY (the ETF), that means (a) the behavior of SPX and (b) the behavior of the ETF as it trades on the market, and (c) the action of the authorized participants. These are simply different things. Which one is \"\"right\"\"? That depends on what you want to do. In theory you might be able to do some analysis of technical signals on SPY and SPX and, for example, use that to make money on the way that they fail to track each other. If you figure out how to do that, though, don't post it here. Send it to me directly. :)\"",
"title": ""
},
{
"docid": "9ea59d67dcb34045c7694a346a08d840",
"text": "SeekingAlpha has a section dedicated to Short ETFs as well as others. In there you will find SH, and SDS. Both of which are inverse to the S&P 500. Edit: I linked to charts that compare SH and SDS to SPY.",
"title": ""
},
{
"docid": "43dc85864d4e91c60c56b2e9969d2747",
"text": "You have stumbled upon a classic trading strategy known as the carry trade. Theoretically you'd expect the exchange rate to move against you enough to make this a bad investment. In reality this doesn't happen (on average). There are even ETFs that automate the process for you (and get better transaction costs and lending/borrowing rates than you ever could): DBV and ICI.",
"title": ""
},
{
"docid": "bccb1b02a9eed71eb46edafd42e96639",
"text": "Do you have a good reason for keeping a US bank account? If not, I would close it and transfer to your Canadian bank account just to simplify your life. Unless you are investing on the scale of George Soros you shouldn't be worrying too much about exchange rates.",
"title": ""
},
{
"docid": "9e424bb3b0e7f90e3c589ee4b3890f1e",
"text": "\"When you hold units of the DLR/DLR.U (TSX) ETF, you are indirectly holding U.S. dollars cash or cash equivalents. The ETF can be thought of as a container. The container gives you the convenience of holding USD in, say, CAD-denominated accounts that don't normally provide for USD cash balances. The ETF price ($12.33 and $12.12, in your example) simply reflects the CAD price of those USD, and the change is because the currencies moved with respect to each other. And so, necessarily, given how the ETF is made up, when the value of the U.S. dollar declines vs. the Canadian dollar, it follows that the value of your units of DLR declines as quoted in Canadian dollar terms. Currencies move all the time. Similarly, if you held the same amount of value in U.S. dollars, directly, instead of using the ETF, you would still experience a loss when quoted in Canadian dollar terms. In other words, whether or not your U.S. dollars are tied up either in DLR/DLR.U or else sitting in a U.S. dollar cash balance in your brokerage account, there's not much of a difference: You \"\"lose\"\" Canadian dollar equivalent when the value of USD declines with respect to CAD. Selling, more quickly, your DLR.U units in a USD-denominated account to yield U.S. dollars that you then directly hold does not insulate you from the same currency risk. What it does is reduce your exposure to other cost/risk factors inherent with ETFs: liquidity, spreads, and fees. However, I doubt that any of those played a significant part in the change of value from $12.33 to $12.12 that you described.\"",
"title": ""
},
{
"docid": "90b990119812669ab920916a9ac08514",
"text": "\"When you invest in an S&P500 index fund that is priced in USD, the only major risk you bear is the risk associated with the equity that comprises the index, since both the equities and the index fund are priced in USD. The fund in your question, however, is priced in EUR. For a fund like this to match the performance of the S&P500, which is priced in USD, as closely as possible, it needs to hedge against fluctuations in the EUR/USD exchange rate. If the fund simply converted EUR to USD then invested in an S&P500 index fund priced in USD, the EUR-priced fund may fail to match the USD-priced fund because of exchange rate fluctuations. Here is a simple example demonstrating why hedging is necessary. I assumed the current value of the USD-priced S&P500 index fund is 1,600 USD/share. The exchange rate is 1.3 USD/EUR. If you purchase one share of this index using EUR, you would pay 1230.77 EUR/share: If the S&P500 increases 10% to 1760 USD/share and the exchange rate remains unchanged, the value of the your investment in the EUR fund also increases by 10% (both sides of the equation are multiplied by 1.1): However, the currency risk comes into play when the EUR/USD exchange rate changes. Take the 10% increase in the price of the USD index occurring in tandem with an appreciation of the EUR to 1.4 USD/EUR: Although the USD-priced index gained 10%, the appreciation of the EUR means that the EUR value of your investment is almost unchanged from the first equation. For investments priced in EUR that invest in securities priced in USD, the presence of this additional currency risk mandates the use of a hedge if the indexes are going to track. The fund you linked to uses swap contracts, which I discuss in detail below, to hedge against fluctuations in the EUR/USD exchange rate. Since these derivatives aren't free, the cost of the hedge is included in the expenses of the fund and may result in differences between the S&P500 Index and the S&P 500 Euro Hedged Index. Also, it's important to realize that any time you invest in securities that are priced in a different currency than your own, you take on currency risk whether or not the investments aim to track indexes. This holds true even for securities that trade on an exchange in your local currency, like ADR's or GDR's. I wrote an answer that goes through a simple example in a similar fashion to the one above in that context, so you can read that for more information on currency risk in that context. There are several ways to investors, be they institutional or individual, can hedge against currency risk. iShares offers an ETF that tracks the S&P500 Euro Hedged Index and uses a over-the-counter currency swap contract called a month forward FX contract to hedge against the associated currency risk. In these contracts, two parties agree to swap some amount of one currency for another amount of another currency, at some time in the future. This allows both parties to effectively lock in an exchange rate for a given time period (a month in the case of the iShares ETF) and therefore protect themselves against exchange rate fluctuations in that period. There are other forms of currency swaps, equity swaps, etc. that could be used to hedge against currency risk. In general, two parties agree to swap one quantity, like a EUR cash flow, payments of a fixed interest rate, etc. for another quantity, like a USD cash flow, payments based on a floating interest rate, etc. 
In many cases these are over-the-counter transactions, there isn't necessarily a standardized definition. For example, if the European manager of a fund that tracks the S&P500 Euro Hedged Index is holding euros and wants to lock in an effective exchange rate of 1.4 USD/EUR (above the current exchange rate), he may find another party that is holding USD and wants to lock in the respective exchange rate of 0.71 EUR/USD. The other party could be an American fund manager that manages a USD-price fund that tracks the FTSE. By swapping USD and EUR, both parties can, at a price, lock in their desired exchange rates. I want to clear up something else in your question too. It's not correct that the \"\"S&P 500 is completely unrelated to the Euro.\"\" Far from it. There are many cases in which the EUR/USD exchange rate and the level of the S&P500 index could be related. For example: Troublesome economic news in Europe could cause the euro to depreciate against the dollar as European investors flee to safety, e.g. invest in Treasury bills. However, this economic news could also cause US investors to feel that the global economy won't recover as soon as hoped, which could affect the S&P500. If the euro appreciated against the dollar, for whatever reason, this could increase profits for US businesses that earn part of their profits in Europe. If a US company earns 1 million EUR and the exchange rate is 1.3 USD/EUR, the company earns 1.3 million USD. If the euro appreciates against the dollar to 1.4 USD/EUR in the next quarter and the company still earns 1 million EUR, they now earn 1.4 million USD. Even without additional sales, the US company earned a higher USD profit, which is reflected on their financial statements and could increase their share price (thus affecting the S&P500). Combining examples 1 and 2, if a US company earns some of its profits in Europe and a recession hits in the EU, two things could happen simultaneously. A) The company's sales decline as European consumers scale back their spending, and B) the euro depreciates against the dollar as European investors sell euros and invest in safer securities denominated in other currencies (USD or not). The company suffers a loss in profits both from decreased sales and the depreciation of the EUR. There are many more factors that could lead to correlation between the euro and the S&P500, or more generally, the European and American economies. The balance of trade, investor and consumer confidence, exposure of banks in one region to sovereign debt in another, the spread of asset/mortgage-backed securities from US financial firms to European banks, companies, municipalities, etc. all play a role. One example of this last point comes from this article, which includes an interesting line: Among the victims of America’s subprime crisis are eight municipalities in Norway, which lost a total of $125 million through subprime mortgage-related investments. Long story short, these municipalities had mortgage-backed securities in their investment portfolios that were derived from, far down the line, subprime mortgages on US homes. I don't know the specific cities, but it really demonstrates how interconnected the world's economies are when an American family's payment on their subprime mortgage in, say, Chicago, can end up backing a derivative investment in the investment portfolio of, say, Hammerfest, Norway.\"",
"title": ""
},
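The long passage above compares a EUR investor's return on a USD-priced index with and without a currency hedge. A compact Python sketch of that comparison, using the passage's example numbers (index 1600 → 1760, exchange rate 1.3 → 1.4 USD/EUR); the "hedged" line assumes a perfect hedge at the entry rate, which real forward contracts only approximate.

```python
# Sketch of the hedged-vs-unhedged arithmetic from the passage above:
# a EUR investor holding one share of a USD-priced index fund.

index_start, index_end = 1600.0, 1760.0   # +10% in USD terms
fx_start, fx_end = 1.3, 1.4               # USD per EUR

eur_cost = index_start / fx_start              # ~1230.77 EUR per share
unhedged_eur_value = index_end / fx_end        # EUR value after the FX move
hedged_eur_value = index_end / fx_start        # idealized hedge locks the old rate

print(f"unhedged return: {unhedged_eur_value / eur_cost - 1:.2%}")  # ~2.1%
print(f"hedged return:   {hedged_eur_value / eur_cost - 1:.2%}")    # 10.0%
```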
{
"docid": "65a80f2facea4fe99eb9be9f03da3d0d",
"text": "Does the Spanish market, or any other market in euroland, have the equivalent of ETF's? If so there ought to be one that is based on something like the US S&P500 or Russell 3000. Otherwise you might check for local offices of large mutual fund companies such as Vanguard, Schwab etc to see it they have funds for sale there in Spain that invest in the US markets. I know for example Schwab has something for Swiss residents to invest in the US market. Do bear in mind that while the US has a stated policy of a 'strong dollar', that's not really what we've seen in practice. So there is substantial 'currency risk' of the dollar falling vs the euro, which could result in a loss for you. (otoh, if the Euro falls out of bed, you'd be sitting pretty.) Guess it all depends on how good your crystal ball is.",
"title": ""
},
{
"docid": "e6a3340c925cebe9771d4f0abb64fb8b",
"text": "When you want to invest in an asset denominated by a foreign currency, your investment is going to have some currency risk to it. You need to worry not just about what happens to your own currency, but also the foreign currency. Lets say you want to invest $10000 in US Stocks as a Canadian. Today that will cost you $13252, since USDCAD just hit 1.3252. You now have two ways you can make money. One is if USDCAD goes up, two is if the stocks go up. The former may not be obvious, but remember, you are holding US denominated assets currently, with the intention of one day converting those assets back into CAD. Essentially, you are long USDCAD (long USD short CAD). Since you are short CAD, if CAD goes up it hurts you It may seem odd to think about this as a currency trade, but it opens up a possibility. If you want a foreign investment to be currency neutral, you just make the opposite currency trade, in addition to your original investment. So in this case, you would buy $10,000 in US stocks, and then short USDCAD (ie long CAD, short USD $10,000). This is kind of savvy and may not be something you would do. But its worth mentioning. And there are also some currency hedged ETFs out there that do this for you http://www.ishares.com/us/strategies/hedge-currency-impact However most are hedged relative to USD, and are meant to hedge the target countries currency, not your own.",
"title": ""
},
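The passage above describes neutralizing currency risk by shorting the same USD notional that the stock purchase creates. An illustrative Python sketch of that offset; only the $10,000 and the 1.3252 entry rate come from the passage, while the +5% stock return and the 1.25 exit rate are assumptions.

```python
# Sketch of the currency-neutral idea in the passage above: a Canadian buying
# USD stocks is implicitly long USD; shorting the same USD notional offsets
# the FX effect on the principal (the USD gain itself still converts at the
# exit rate). Return and exit rate below are assumed values.

usd_invested = 10_000.0
usdcad_entry = 1.3252
stock_return = 0.05          # assumed +5% in USD terms
usdcad_exit = 1.25           # assumed: CAD strengthens against USD

# unhedged: convert the USD proceeds back to CAD at the new rate
unhedged_cad = usd_invested * (1 + stock_return) * usdcad_exit

# hedged: the short USD/CAD position gains what the principal conversion loses
fx_hedge_pnl_cad = usd_invested * (usdcad_entry - usdcad_exit)
hedged_cad = unhedged_cad + fx_hedge_pnl_cad

cad_cost = usd_invested * usdcad_entry           # 13,252 CAD original outlay
print(f"unhedged P&L: {unhedged_cad - cad_cost:,.2f} CAD")
print(f"hedged P&L:   {hedged_cad - cad_cost:,.2f} CAD")
```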
{
"docid": "625a988bfb55940701a041358b283f3b",
"text": "Some of the ETFs you have specified have been delisted and are no longer trading. If you want to invest in those specific ETFs, you need to find a broker that will let you buy European equities such as those ETFs. Since you mentioned Merrill Edge, a discount broking platform, you could also consider Interactive Brokers since they do offer trading on the London Stock Exchange. There are plenty more though. Beware that you are now introducing a foreign exchange risk into your investment too and that taxation of capital returns/dividends may be quite different from a standard US-listed ETF. In the US, there are no Islamic or Shariah focussed ETFs or ETNs listed. There was an ETF (JVS) that traded from 2009-2010 but this had such little volume and interest, the fees probably didn't cover the listing expenses. It's just not a popular theme for North American listings.",
"title": ""
}
] |
fiqa
|
6706a08de83519e7ed61cd39ea073ba0
|
If a mutual fund did really well last year, then statistically speaking, is it likely going to do bad this year?
|
[
{
"docid": "451a1147ad21efe2f898c5a001fd5c8a",
"text": "\"This can be answered by looking at the fine print for any prospectus for any stock, bond or mutual fund. It says: \"\"Past performance is not an indicator of future performance.\"\". A mutual fund is a portfolio of common stocks, managed by somebody for a fee. There are many factors that can drive performance of a fund up or down. Here are a few: I'm sure there are many more market influences that I cannot think of that push fund prices up or down. What the fund did last year is not one of them. If it were, making money in the mutual fund market would be as easy as investing in last year's winners and everyone would be doing it.\"",
"title": ""
},
{
"docid": "d10497d2ccd984e2f58e17332f779a50",
"text": "Nearly all long-lived active funds underperform the market over the long run. The best they can hope for in almost all cases is to approximate the market return. Considering that the market return is ~9%, this fund should be expected to do less well. In terms of predicting future performance, if its average return is greater than the average market return, its future average return can be expected to fall.",
"title": ""
},
{
"docid": "e73661e3b17aa7ca2d29a1cf8d4133db",
"text": "From a mathematical point of view the stats do not change depending on past performance. Just because a fund is lucky one year doesn't mean that it will be unlucky the next. Consider tossing a coin, the chance of heads is 50%. If you have just thrown 3 heads, the chance of heads is still 50%. It doesn't go down. If you throw 10 heads in a row the chance of a heads is still 50%, in fact you many suspect there is something odd about the coin, if it was an unfair coin then the chance of a heads would be higher than 50%. It could be the fund is better run, but there could be other reasons, including random chance. Some funds will randomly do better and some will randomly do worse What you do know is that if they did better than average other funds have done worse, at least for last year.",
"title": ""
}
] |
[
{
"docid": "148fe3c6b836d3b733d3f1f75a6f917a",
"text": "\"In the case of a specific fund, I'd be tempted to get get an annual report that would disclose distribution data going back up to 5 years. The \"\"View prospectus and reports\"\" would be the link on the site to note and use that to get to the PDF of the report to get the data that was filed with the SEC as that is likely what matters more here. Don't forget that mutual fund distributions can be a mix of dividends, bond interest, short-term and long-term capital gains and thus aren't quite as simple as stock dividends to consider here.\"",
"title": ""
},
{
"docid": "b657b0aaeb73a927b1064e620421854b",
"text": "So you're saying as long as they are always slightly up investors will be happy? I can believe that. Also, here is some evidence to support you. http://www.hedgeweek.com/2017/07/20/254194/hfr-reports-hedge-funds-allocations-beat-redemptions Though you have to admit negative returns must be especially hard to own to your investors when everything is up",
"title": ""
},
{
"docid": "5394995b18736e3123af489412bcab30",
"text": "\"My two cents: I am a pension actuary and see the performance of funds on a daily basis. Is it normal to see down years? Yes, absolutely. It's a function of the directional bias of how the portfolio is invested. In the case of a 401(k) that almost always mean a positive directional bias (being long). Now, in your case I see two issues: The amount of drawdown over one year. It is atypical to have a 14% loss in a little over a year. Given the market conditions, this means that you nearly experienced the entire drawdown of the SP500 (which your portfolio is highly correlated to) and you have no protection from the downside. The use of so-called \"\"target-date funds\"\". Their very implication makes no sense. Essentially, they try to generate a particular return over the elapsed time until retirement. The issue is that the market is by all statistical accounts random with positive drift (it can be expected to move up in the long term). This positive drift is due to the fact that people should be paid to take on risk. So if you need the money 20 years from now, what's the big deal? Well, the issue is that no one, and I repeat, no one, knows when the market will experience long down moves. So you happily experience positive drift for 20 years and your money grows to a decent size. Then, right before you retire, the market shaves 20%+ of your investments. Will you recoup these damages? Most likely yes. But will that be in the timeframe you need? The market doesn't care if you need money or not. So, here is my advice if you are comfortable taking control of your money. See if you can roll your money into an IRA (some 401(k) plans will permit this) or, if you contribute less that the 401(k) contribution limit you make want to just contribute to an IRA (be mindful of the annual limits). In this case, you can set up a self-directed account. Here you will have the flexibility to diversify and take action as necessary. And by diversify, I don't mean that \"\"buy lots of different stuff\"\" garbage, I mean focus on uncorrelated assets. You can get by on a handful of ETFs (SPY, TLT, QQQ, ect.). These all have liquid options available. Once you build a base, you can lower basis by writing covered calls against these positions. This is allowed in almost all IRA accounts. In my opinion, and I see this far too often, your potential and drive to take control of your assets is far superior than the so called \"\"professionals or advisors\"\". They will 99% of the time stick you in a target date fund and hope that they make their basis points on your money and retire before you do. Not saying everyone is unethical, but its hard to care about your money more than you will.\"",
"title": ""
},
{
"docid": "bff4b865a1719e435d5065a66da76fe1",
"text": "It means someone's getting paid too much. I'd check the sharpe ratio and compare that to similar funds along with their expense ratio. So in some scenarios it's not necessarily a bad thing but being informed is the important thing",
"title": ""
},
{
"docid": "afe6a50f6ffa99608a6aa9f1d64bd178",
"text": "Could somebody explain to me exactly why the writer doesn't think this is a win for passive investing? Aren't 'this could happen' statements only relevant to active managers so if you already believe that active investing is more successful than passive then of course you'll just fit this situation to 'there is still potential for major loss, the S&P has tanked x many times' because you believe that there are predictable patterns in markets while the passive investor says that isn't true.",
"title": ""
},
{
"docid": "5d7736255f034e29a930b7eab8d3047c",
"text": "\"Forecasts of stock market direction are not reliable, so you shouldn't be putting much weight on them. Long term, you can expect to do better in stocks, but obtaining this better expected return has the danger of \"\"buying in\"\" to the market at a particularly bad moment, leading to a substantially lower return. So mitigate that risk while moving in a big piece of cash by \"\"dollar cost averaging\"\". An example would be to divide your cash hoard (conceptually) into say six pieces, and invest each piece in the index fund two months apart. After a year you will have invested the whole sum at about the average of the index for the year.\"",
"title": ""
},
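The dollar-cost-averaging suggestion above (split the cash hoard into six pieces and invest every two months) can be illustrated with a small sketch. The lump sum and the index prices below are made-up illustration values, not figures from the passage.

```python
# Hypothetical sketch of dollar-cost averaging a lump sum into six tranches.

lump_sum = 60_000.0                      # cash to deploy (assumption)
tranche = lump_sum / 6                   # invest one sixth every two months

# Made-up index prices at each of the six purchase dates.
prices = [100.0, 92.0, 85.0, 90.0, 97.0, 104.0]

shares = sum(tranche / p for p in prices)
avg_cost = lump_sum / shares             # average price actually paid per share

print(f"Shares bought:            {shares:,.1f}")
print(f"Average cost per share:   {avg_cost:,.2f}")
print(f"Simple average of prices: {sum(prices) / len(prices):,.2f}")
```

Because equal cash amounts buy more shares when prices are low, the average cost per share comes out below the simple average of the purchase prices, which is the mechanical benefit of spreading the buys out.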
{
"docid": "cc4f2ceb7e54a35240e350b1fc1ff93a",
"text": "Terminology aside. Your gains for this year in a mutual fund do seem low. These are things that can be quickly, and precisely answered through a conversation with your broker. You can request info on the performance of the fund you are invested in from the broker. They are required to disclose this information to you. They can give you the performance of the fund overall, as well as break down for you the specific stocks and bonds that make up the fund, and how they are performing. Talk about what kind of fund it is. If your projected retirement date is far in the future your fund should probably be on the aggressive side. Ask what the historic average is for the fund you're in. Ask about more aggressive funds, or less if you prefer a lower average but more stable performance. Your broker should be able to adequately, and in most cases accurately, set your expectation. Also ask about fees. Good brokerages charge reasonable fees, that are typically based on the gains the fund makes, not your total investment. Make sure you understand what you are paying. Even without knowing the management fees, your growth this year should be of concern. It is exceptionally low, in a year that showed good gains in many market sectors. Speak with your broker and decide if you will stick with this fund or have your IRA invest in a different fund. Finally JW8 makes a great point, in that your fund may perform well or poorly over any given short term, but long term your average should fall within the expected range for the type of fund you're invested in (though, not guaranteed). MOST importantly, actually talk to your broker. Get real answers, since they are as easy to come by as posting on stack.",
"title": ""
},
{
"docid": "d551a112c05e7e4ad3cf68a202c506dc",
"text": "That is such a vague statement, I highly recommend disregarding it entirely, as it is impossible to know what they meant. Their goal is to convince you that index funds are the way to go, but depending on what they consider an 'active trader', they may be supporting their claim with irrelevant data Their definition of 'active trader' could mean any one or more of the following: 1) retail investor 2) day trader 3) mutual fund 4) professional investor 5) fund continuously changing its position 6) hedge fund. I will go through all of these. 1) Most retail traders lose money. There are many reasons for this. Some rely on technical strategies that are largely unproven. Some buy rumors on penny stocks in hopes of making a quick buck. Some follow scammers on twitter who sell newsletters full of bogus stock tips. Some cant get around the psychology of trading, and thus close out losing positions late and winning positions early (or never at all) [I myself use to do this!!]. I am certain 99% of retail traders cant beat the market, because most of them, to be frank, put less effort into deciding what to trade than in deciding what to have for lunch. Even though your pension funds presentation is correct with respect to retail traders, it is largely irrelevant as professionals managing your money should not fall into any of these traps. 2) I call day traders active traders, but its likely not what your pension fund was referring to. Day trading is an entirely different animal to long or medium term investing, and thus I also think the typical performance is irrelevant, as they are not going to manage your money like a day trader anyway. 3,4,5) So the important question becomes, do active funds lose 99% of the time compared to index funds. NO! No no no. According to the WSJ, actively managed funds outperformed passive funds in 2007, 2009, 2013, 2015. 2010 was basically a tie. So 5 out of 9 years. I dont have a calculator on me but I believe that is less than 99%! Whats interesting is that this false belief that index funds are always better has become so pervasive that you can see active funds have huge outflows and passive have huge inflows. It is becoming a crowded trade. I will spare you the proverb about large crowds and small doors. Also, index funds are so heavily weighted towards a handful of stocks, that you end up becoming a stockpicker anyway. The S&P is almost indistinguishable from AAPL. Earlier this year, only 6 stocks were responsible for over 100% of gains in the NASDAQ index. Dont think FB has a good long term business model, or that Gilead and AMZN are a cheap buy? Well too bad if you bought QQQ, because those 3 stocks are your workhorses now. See here 6) That graphic is for mutual funds but your pension fund may have also been including hedge funds in their 99% figure. While many dont beat their own benchmark, its less than 99%. And there are reasons for it. Many have investors that are impatient. Fortress just had to close one of its funds, whose bets may actually pay off years from now, but too many people wanted their money out. Some hedge funds also have rules, eg long only, which can really limit your performance. While important to be aware of this, that placing your money with a hedge fund may not beat a benchmark, that does not automatically mean you should go with an index fund. So when are index funds useful? When you dont want to do any thinking. When you dont want to follow market news, at all. Then they are appropriate.",
"title": ""
},
{
"docid": "8b4d4b2faa01a03c992d0834a7b6d2f1",
"text": "Stock index funds are likely, but not certainly, to be a good long-term investment. In countries other than the USA, there have been 30+ year periods where stocks either underperformed compared to bonds, or even lost value in absolute terms. This suggests that it may be an overgeneralization to assume that they always do well in the long term. Furthermore, it may suggest that they are persistently overvalued for the risk, and perhaps due for a long-term correction. (If everybody assumes they're safe, the equity risk premium is likely to be eaten up.) Putting all of your money into them would, for most people, be taking an unnecessary risk. You should cover some other asset classes too. If stocks do very well, a portfolio with some allocation to more stable assets will still do fairly well. If they crash, a portfolio with less risky assets will have a better chance of being at least adequate.",
"title": ""
},
{
"docid": "5d2b124795bc36a1421cb615e4b3ab19",
"text": "\"Can you easily stomach the risk of higher volatility that could come with smaller stocks? How certain are you that the funds wouldn't have any asset bloat that could cause them to become large-cap funds for holding to their winners? If having your 401(k) balance get chopped in half over a year doesn't give you any pause or hesitation, then you have greater risk tolerance than a lot of people but this is one of those things where living through it could be interesting. While I wouldn't be against the advice, I would consider caution on whether or not the next 40 years will be exactly like the averages of the past or not. In response to the comments: You didn't state the funds so I how I do know you meant index funds specifically? Look at \"\"Fidelity Low-Priced Stock\"\" for a fund that has bloated up in a sense. Could this happen with small-cap funds? Possibly but this is something to note. If you are just starting to invest now, it is easy to say, \"\"I'll stay the course,\"\" and then when things get choppy you may not be as strong as you thought. This is just a warning as I'm not sure you get my meaning here. Imagine that some women may think when having a child, \"\"I don't need any drugs,\"\" and then the pain comes and an epidural is demanded because of the different between the hypothetical and the real version. While you may think, \"\"I'll just turn the cheek if you punch me,\"\" if I actually just did it out of the blue, how sure are you of not swearing at me for doing it? Really stop and think about this for a moment rather than give an answer that may or may not what you'd really do when the fecal matter hits the oscillator. Couldn't you just look at what stocks did the best in the last 10 years and just buy those companies? Think carefully about what strategy are you using and why or else you could get tossed around as more than a few things were supposed to be the \"\"sure thing\"\" that turned out to be incorrect like the Dream Team of Long-term Capital Management, the banks that were too big to fail, the Japanese taking over in the late 1980s, etc. There are more than a few times where things started looking one way and ended up quite differently though I wonder if you are aware of this performance chasing that some will do.\"",
"title": ""
},
{
"docid": "642b86f98a538677ffa13426a8d71943",
"text": "Is it POSSIBLE? Of course. I don't even need to do any research to prove that. Just some mathematical reasoning: Take the S&P 500. Find the performance of each stock in that list over whatever time period you want to use for your experiment. Now select some number of the best-performing stocks from the list -- any number less than 500. By definition, the X best must be better than or equal to the average. Assuming all the stocks on the S&P did not have EXACTLY the same performance, these 10 must be better than average. You now have a diversified portfolio that performed better than the S&P 500 index fund. Of course as they always say in a prospectus, past performance is not a guarantee of future performance. It's certainly possible to do. The question is, if YOU selected the stocks making up a diversified portfolio, would your selections do better than an index fund?",
"title": ""
},
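The claim in the passage above is purely arithmetic: the average return of the best X stocks in an index can never be below the index's own (equal-weight) average over the same hindsight period. A tiny sketch of that point, using randomly generated returns as stand-ins for the constituents; the return distribution is an arbitrary assumption.

```python
import random

random.seed(0)

# Made-up one-period returns for 500 hypothetical index constituents.
returns = [random.gauss(0.08, 0.25) for _ in range(500)]

index_avg = sum(returns) / len(returns)                  # equal-weight "index" return
top10_avg = sum(sorted(returns, reverse=True)[:10]) / 10  # best 10 picked in hindsight

# By construction the top-10 average is at least the overall average.
assert top10_avg >= index_avg
print(f"Index average:  {index_avg:.2%}")
print(f"Top-10 average: {top10_avg:.2%}")
```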
{
"docid": "352ae26769c4ba7b9868bfb94afe8813",
"text": "\"You absolutely should consider expenses. Why do they matter when the \"\"sticker price\"\" already includes them? Because you can be much more certain about what the expense ratio will be in the future than you can about what the fund performance will be in the future. The \"\"sticker price\"\" mixes generalized economic growth (i.e., gains you could have gotten from other funds) with gains specific to the fund, but the expense ratio is completely fund-specific. In other words, when looking at the \"\"sticker price\"\" performance of a fund, it's difficult to determine how that performance will extend into the future. But the expense ratio will definitely carry into the future. It is rare for funds to drastically change their expense ratios, but common for funds to change their performance. Suppose you find a fund that has returned a net of 8% over some time period and has a 1% expense ratio, and another fund that has returned a net of 10% but has a 2% expense ratio. So the first fund returned 9%-1% = 8% and the second returned 12%-2%=10%. There are decent odds that, over some future time period, the first fund will return 10%-1%=9% while the second fund will return 10%-2%=8%. In order for the second fund to be better than the first, it has to reliably outperform it by 1%; this is harder than it may sound. Simply put, there is a lot of \"\"noise\"\" in the fund performance, but the expense ratio is \"\"all signal\"\". Of course, if you find a fund that will reliably return 20% after expenses of 3%, it would probably make sense to choose that over one that returns 10% after expenses of 1%. But \"\"will reliably return\"\" is not the same as \"\"has returned over the past N years\"\", and the difference between the two phrases becomes greater and greater the smaller N is. When you find a fund that seems to have performed staggeringly well over some time period, you should be cautious; there is a good chance that the future holds some regression to the mean, and the fund will not continue to be so stellar. You may want to take a look at this question which asked about Morningstar fund ratings, which are essentially a measure of past performance. My answer references a study done by Morningstar comparing its own star ratings vs. fund expenses as a predictor of overall results. I'll repeat here the take-home message: How often did it pay to heed expense ratios? Every time. How often did it pay to heed the star rating? Most of the time, with a few exceptions. How often did the star rating beat expenses as a predictor? Slightly less than half the time, taking into account funds that expired during the time period. In other words, Morningstar's own study showed that its own star ratings (that is, past fund performance) are not as good at predicting success as simply looking at the expense ratios of the funds.\"",
"title": ""
},
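The passage above treats the expense ratio as the persistent, predictable part of a fund's return. As a rough illustration of how that drag compounds, here is a sketch that assumes both funds earn the same 10% gross going forward, as in the passage's forward-looking scenario; the 20-year horizon and the starting balance are assumptions, not figures from the passage.

```python
# Compounding drag of expense ratios (gross return and fees per the passage's scenario).

def grow(gross_return: float, expense_ratio: float, years: int, start: float = 10_000.0) -> float:
    """Compound a starting balance at (gross_return - expense_ratio) per year."""
    return start * (1 + gross_return - expense_ratio) ** years

years = 20
fund_a = grow(0.10, 0.01, years)   # cheaper fund: 1% expense ratio
fund_b = grow(0.10, 0.02, years)   # pricier fund: 2% expense ratio

print(f"Fund A (1% fee) after {years} years: {fund_a:,.0f}")
print(f"Fund B (2% fee) after {years} years: {fund_b:,.0f}")
```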
{
"docid": "991cef19bbf007ca750f256f14ac5d3a",
"text": "Since the vast majority of fund managers/big investors run private entities, it's not possible to track their performance. It's possible to look at what they are holding (that's never real-time information) and emulate their performance.",
"title": ""
},
{
"docid": "787e561450535d93b98cac7b6f0088e2",
"text": "This is Ellie Lan, investment analyst at Betterment. To answer your question, American investors are drawn to use the S&P 500 (SPY) as a benchmark to measure the performance of Betterment portfolios, particularly because it’s familiar and it’s the index always reported in the news. However, going all in to invest in SPY is not a good investment strategy—and even using it to compare your own diversified investments is misleading. We outline some of the pitfalls of this approach in this article: Why the S&P 500 Is a Bad Benchmark. An “algo-advisor” service like Betterment is a preferable approach and provides a number of advantages over simply investing in ETFs (SPY or others like VOO or IVV) that track the S&P 500. So, why invest with Betterment rather than in the S&P 500? Let’s first look at the issue of diversification. SPY only exposes investors to stocks in the U.S. large cap market. This may feel acceptable because of home bias, which is the tendency to invest disproportionately in domestic equities relative to foreign equities, regardless of their home country. However, investing in one geography and one asset class is riskier than global diversification because inflation risk, exchange-rate risk, and interest-rate risk will likely affect all U.S. stocks to a similar degree in the event of a U.S. downturn. In contrast, a well-diversified portfolio invests in a balance between bonds and stocks, and the ratio of bonds to stocks is dependent upon the investment horizon as well as the individual's goals. By constructing a portfolio from stock and bond ETFs across the world, Betterment reduces your portfolio’s sensitivity to swings. And the diversification goes beyond mere asset class and geography. For example, Betterment’s basket of bond ETFs have varying durations (e.g., short-term Treasuries have an effective duration of less than six months vs. U.S. corporate bonds, which have an effective duration of just more than 8 years) and credit quality. The level of diversification further helps you manage risk. Dan Egan, Betterment’s Director of Behavioral Finance and Investing, examined the increase in returns by moving from a U.S.-only portfolio to a globally diversified portfolio. On a risk-adjusted basis, the Betterment portfolio has historically outperformed a simple DIY investor portfolio by as much as 1.8% per year, attributed solely to diversification. Now, let’s assume that the investor at hand (Investor A) is a sophisticated investor who understands the importance of diversification. Additionally, let’s assume that he understands the optimal allocation for his age, risk appetite, and investment horizon. Investor A will still benefit from investing with Betterment. Automating his portfolio management with Betterment helps to insulate Investor A from the ’behavior gap,’ or the tendency for investors to sacrifice returns due to bad timing. Studies show that individual investors lose, on average, anywhere between 1.2% to 4.3% due to the behavior gap, and this gap can be as high as 6.5% for the most active investors. Compared to the average investor, Betterment customers have a behavior gap that is 1.25% lower. How? Betterment has implemented smart design to discourage market timing and short-sighted decision making. For example, Betterment’s Tax Impact Preview feature allows users to view the tax hit of a withdrawal or allocation change before a decision is made. Currently, Betterment is the only automated investment service to offer this capability. 
This function allows you to see a detailed estimate of the expected gains or losses broken down by short- and long-term, making it possible for investors to make better decisions about whether short-term gains should be deferred to the long-term. Now, for the sake of comparison, let’s assume that we have an even more sophisticated investor (Investor B), who understands the pitfalls of the behavior gap and is somehow able to avoid it. Betterment is still a better tool for Investor B because it offers a suite of tax-efficient features, including tax loss harvesting, smarter cost-basis accounting, municipal bonds, smart dividend reinvesting, and more. Each of these strategies can be automatically deployed inside the portfolio—Investor B need not do a thing. Each of these strategies can boost returns by lowering tax exposure. To return to your initial question—why not simply invest in the S&P 500? Investing is a long-term proposition, particularly when saving for retirement or other goals with a time horizon of several decades. To be a successful long-term investor means employing the core principles of diversification, tax management, and behavior management. While the S&P might look like a ‘hot’ investment one year, there are always reversals of fortune. The goal with long-term passive investing—the kind of investing that Betterment offers—is to help you reach your investing goals as efficiently as possible. Lastly, Betterment offers best-in-industry advice about where to save and how much to save for no fee.",
"title": ""
},
{
"docid": "88df300e6b133556974c6289f78c352f",
"text": "The only way for a mutual fund to default is if it inflated the NAV. I.e.: it reports that its investments worth more than they really are. Then, in case of a run on the fund, it may end up defaulting since it won't have the money to redeem shares at the NAV it published. When does it happen? When the fund is mismanaged or is a scam. This happened, for example, to the fund Madoff was managing. This is generally a sign of a Ponzi scheme or embezzlement. How can you ensure the funds you invest in are not affected by this? You'll have to read the fund reports, check the independent auditors' reports and check for clues. Generally, this is the job of the SEC - that's what they do as regulators. But for smaller funds, and private (i.e.: not public) investment companies, SEC may not be posing too much regulations.",
"title": ""
}
] |
fiqa
|
287c1e883e997b622e05a893d549da1a
|
Couch Potato Portfolio for Europeans?
|
[
{
"docid": "60dbfff8b0fc19a14a628170f4c6aa8d",
"text": "\"The question is asking for a European equivalent of the so-called \"\"Couch Potato\"\" portfolio. \"\"Couch Potato\"\" portfolio is defined by the two URLs provided in question as, Criteria for fund composition Fixed-income: Regardless of country or supra-national market, the fixed-income fund should have holdings throughout the entire length of the yield curve (most available maturities), as well as being a mix of government, municipal (general obligation), corporate and high-yield bonds. Equity: The common equity position should be in one equity market index fund. It shouldn't be a DAX-30 or CAC-40 or DJIA type fund. Instead, you want a combination of growth and value companies. The fund should have as many holdings as possible, while avoiding too much expense due to transaction costs. You can determine how much is too much by comparing candidate funds with those that are only investing in highly liquid, large company stocks. Why it is easier for U.S. and Canadian couch potatoes It will be easier to find two good funds, at lower cost, if one is investing in a country with sizable markets and its own currency. That's why the Couch Potato strategy lends itself most naturally to the U.S.A, Canada, Japan and probably Australia, Brazil, South Korea and possibly Mexico too. In Europe, pre-EU, any of Germany, France, Spain, Italy or the Scandinavian countries would probably have worked well. The only concern would be (possibly) higher equity transactions costs and certainly larger fixed-income buy-sell spreads, due to smaller and less liquid markets other than Germany. These costs would be experienced by the portfolio manager, and passed on to you, as the investor. For the EU couch potato Remember the criteria, especially part 2, and the intent as described by the Couch Potato name, implying extremely passive investing. You want to choose two funds offered by very stable, reputable fund management companies. You will be re-balancing every six months or a year, only. That is four transactions per year, maximum. You don't need a lot of interaction with anyone, but you DO need to have the means to quickly exit both sides of the trade, should you decide, for any reason, that you need the money or that the strategy isn't right for you. I would not choose an ETF from iShares just because it is easy to do online transactions. For many investors, that is important! Here, you don't need that convenience. Instead, you need stability and an index fund with a good reputation. You should try to choose an EU based fund manager, or one in your home country, as you'll be more likely to know who is good and who isn't. Don't use Vanguard's FTSE ETF or the equivalent, as there will probably be currency and foreign tax concerns, and possibly forex risk. The couch potato strategy requires an emphasis on low fees with high quality funds and brokers (if not buying directly from the fund). As for type of fund, it would be best to choose a fund that is invested in mostly or only EU or EEU (European Economic Union) stocks, and the same for bonds. That will help minimize your transaction costs and tax liability, while allowing for the sort of broad diversity that helps buy and hold index fund investors.\"",
"title": ""
}
] |
[
{
"docid": "1df0824cd1106c15f22942b08b7f6d3e",
"text": "Using a simple investment calculator to get a sense of scale here, to have 70k total, including the 500 a month invested, after ten years you just need returns of 2%. To earn 70k on top of the money invested you would need returns over 20%. To do that in five years you would need over 50% annual return. That is quite a big difference. Annualized returns of 20% would require high risk and a very large amount of time invested, skill and luck. 2% returns can be nearly guaranteed without much effort. I would encourage you to think about your money more holistically. If you get very unlucky with investments and don't make any money will you not go on the vacations even if your income allows? That doesn't make a lot of sense. As always, spend all your money with the current and future in mind. Investment return Euros are no different from any other Euros. At that point, the advice is the same for all investors try to get as much return as possible for the risk you are comfortable with. You seem to have a high tolerance for risk. Generally, for investors with a high risk tolerance a broadly diversified portfolio of stocks (with maybe a small amount of bonds, other investments) will give the most return over the long term for the risk taken. After that generally the next most useful way to boost your returns is to try to avoid taxes which is why we talk about 401(k)s so much around here. Each European country has different tax law, but please ask questions here about your own country as well as you mention money.se could use more ex-US questions.",
"title": ""
},
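The passage above quotes calculator results for saving 500 a month over ten years. The sketch below shows the underlying future-value formula; it assumes monthly compounding, so the outputs will only roughly match the figures quoted, which depend on the original calculator's own assumptions.

```python
# Rough future-value check for saving 500 per month over 10 years
# (monthly compounding assumed; results are approximate, as in the passage).

def fv_monthly(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a level monthly contribution at a nominal annual rate."""
    i = annual_rate / 12
    n = years * 12
    if i == 0:
        return monthly * n
    return monthly * ((1 + i) ** n - 1) / i

contributed = 500 * 12 * 10                      # 60,000 paid in over ten years
for rate in (0.02, 0.10, 0.20):
    total = fv_monthly(500, rate, 10)
    print(f"{rate:.0%}: total {total:,.0f}, gain on top of contributions {total - contributed:,.0f}")
```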
{
"docid": "6ee5094a258ae0377d39f8cdcfb21087",
"text": "\"Tricky question, basically, you just want to first spread risk around, and then seek abnormal returns after you understand what portions of your portfolio are influenced by (and understand your own investment goals) For a relevant timely example: the German stock exchange and it's equity prices are reaching all time highs, while the Greek asset prices are reaching all time lows. If you just invested in \"\"Europe\"\" your portfolio will experience only the mean, while suffering from exchange rate changes. You will likely lose because you arbitrarily invested internationally, for the sake of being international, instead of targeting a key country or sector. Just boils down to more research for you, if you want to be a passive investor you will get passive investor returns. I'm not personally familiar with funds that are good at taking care of this part for you, in the international markets.\"",
"title": ""
},
{
"docid": "66391544bf97a3858c7e540b5e958624",
"text": "To be honest, wall street survivor is good but when it comes to learning the stock markets from Europe, Beat wall street is the game to be playing. You can try it out for your self here on http://beatwallstreet.equitygameonline.com/ It is easy to use and there are monthly prizes available to winners, such as Ipads, Iphones and students who play it the game can win internships at top investment banks and brokers",
"title": ""
},
{
"docid": "5b683b5c56dadebd966fea31964fadf1",
"text": "\"One alternative to bogleheadism is the permanent portfolio concept (do NOT buy the mutual fund behind this idea as you can easily obtain access to a low cost money market fund, stock index fund, and bond fund and significantly reduce the overall cost). It doesn't have the huge booms that stock plans do, but it also doesn't have the crushing blows either. One thing some advisers mention is success is more about what you can stick to than what \"\"traditionally\"\" makes sense, as you may not be able to stick to what traditionally makes sense (all people differ). This is an excellent pro and con critique of the permanent portfolio (read the whole thing) that does highlight some of the concerns with it, especially the big one: how well will it do in a world of high interest rates? Assuming we ever see a world of high interest rates, it may not provide a great return. The authors make the assumption that interest rates will be rising in the future, thus the permanent portfolio is riskier than a traditional 60/40. As we're seeing in Europe, I think we're headed for a world of negative interest rates - something in the past most advisers have thought was very unlikely. I don't know if we'll see interest rates above 6% in my lifetime and if I live as long as my father, that's a good 60+ years ahead. (I realize people will think this is crazy to write, but consider that people are willing to pay governments money to hold their cash - that's how crazy our world is and I don't see this changing.)\"",
"title": ""
},
{
"docid": "c517ef7ba52c41d23492de2239036a19",
"text": "Investing in property hoping that it will gain value is usually foolish; real estate increases about 3% a year in the long run. Investing in property to rent is labor-intensive; you have to deal with tenants, and also have to take care of repairs. It's essentially getting a second job. I don't know what the word pension implies in Europe; in America, it's an employer-funded retirement plan separate from personally funded retirement. I'd invest in personally funded retirement well before buying real estate to rent, and diversify my money in that retirement plan widely if I was within 10-20 years of retirement.",
"title": ""
},
{
"docid": "bab7bb817344b8591a92849a473ed6a7",
"text": "I beg to differ: Israel has an incredibly well managed central bank, and the usury market is wonderfully competitive. It's a shame Stanley Fischer has retired. His management is the case study in central bank management. Rates are low because inflation is low. The nominal rate is irrelevant to return because a 2% nominal return with 1% inflation is superior to a 5% nominal return with 9% inflation. A well-funded budget is the best first step, so now a tweak is necessary: excess capital beyond budgeting should be moved quickly to internationally diversified equities after funding, discounted and adjusted, longer term budgets. Credit will not pay the rate necessary for long term investment. Higher variance is the price to pay for higher returns.",
"title": ""
},
{
"docid": "7d96ffa27caec8d874570b6eff6a9c68",
"text": "\"The portfolio described in that post has a blend of small slices of Vanguard sector funds, such as Vanguard Pacific Stock Index (VPACX). And the theory is that rebalancing across them will give you a good risk-return tradeoff. (Caveat: I haven't read the book, only the post you link to.) Similar ETFs are available from Vanguard, iShares, and State Street. If you want to replicate the GFP exactly, pick from them. (If you have questions about how to match specific funds in Australia, just ask another question.) So I think you could match it fairly exactly if you wanted to. However, I think trying to exactly replicate the Gone Fishin Portfolio in Australia would not be a good move for most people, for a few reasons: Brokerage and management fees are generally higher in Australia (smaller market), so dividing your investment across ten different securities, and rebalancing, is going to be somewhat more expensive. If you have a \"\"middle-class-sized\"\" portfolio of somewhere in the tens of thousands to low millions of dollars, you're cutting it into fairly small slices to manually allocate 5% to various sectors. To keep brokerage costs low you probably want to buy each ETF only once every one-two years or so. You also need to keep track of the tax consequences of each of them. If you are earning and spending Australian dollars, and looking at the portfolio in Australian dollars, a lot of those assets are going to move together as the Australian dollar moves, regardless of changes in the underlying assets. So there is effectively less diversification than you would have in the US. The post doesn't mention the GFP's approach to tax. I expect they do consider it, but it's not going to be directly applicable to Australia. If you are more interested in implementing the general approach of GFP rather than the specific details, what I would recommend is: The Vanguard and superannuation diversified funds have a very similar internal split to the GFP with a mix of local, first-world and emerging market shares, bonds, and property trusts. This is pretty much fire-and-forget: contribute every month and they will take care of rebalancing, spreading across asset classes, and tax calculations. By my calculations the cost is very similar, the diversification is very similar, and it's much easier. The only thing they don't generally cover is a precious metals allocation, and if you want that, just put 5% of your money into the ASX:GOLD ETF, or something similar.\"",
"title": ""
},
{
"docid": "44c1a694da5c07c973e7e50b0180cf2c",
"text": "According to your post, you bought seven shares of VBR at $119.28 each on August 23rd. You paid €711,35. Now, on August 25th, VBR is worth $120.83. So you have But you want to know what you have in EUR, not USD. So if I ask Google how much $845.81 is in EUR, it says €708,89. That's even lower than what you're seeing. It looks like USD has fallen in value relative to EUR. So while the stock price has increased in dollar terms, it has fallen in euro terms. As a result, the value that you would get in euros if you sold the stock has fallen from the price that you paid. Another way of thinking about this is that your price per share was €101,72 and is now €101,33. That's actually a small drop. When you buy and sell in a different currency that you don't actually want, you add the currency risk to your normal risk. Maybe that's what you want to do. Or maybe you would be better off sticking to euro-denominated investments. Usually you'd do dollar-denominated investments if some of your spending was in dollars. Then if the dollar goes up relative to the euro, your investment goes up with it. So you can cash out and make your purchases in dollars without adding extra money. If you make all your purchases in euros, I would normally recommend that you stick to euro-denominated investments. The underlying asset might be in the US, but your fund could still be in Europe and list in euros. That's not to say that you can't buy dollar-denominated investments with euros. Clearly you can. It's just that it adds currency risk to the other risks of the investment. Unless you deliberately want to bet that USD will rise relative to EUR, you might not want to do that. Note that USD may rise over the weekend and put you back in the black. For that matter, even if USD continues to fall relative to the EUR, the security might rise more than that. I have no opinion on the value of VBR. I don't actually know what that is, as it doesn't matter for the points I was making. I'm not saying to sell it immediately. I'm saying that you might prefer euro-denominated investments when you buy in the future. Again, unless you are taking this particular risk deliberately.",
"title": ""
},
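The arithmetic in the passage above can be laid out explicitly. All figures below (7 shares, $119.28, €711,35, $120.83, $845.81, €708,89) are taken from the passage; the exit exchange rate is simply backed out from those quoted values.

```python
# Currency math for the VBR example above (all figures taken from the passage).

shares = 7
buy_px_usd = 119.28
cost_eur = 711.35                        # EUR actually paid at purchase
now_px_usd = 120.83

value_usd = shares * now_px_usd          # 845.81 USD today
value_eur = 708.89                       # EUR value quoted in the passage
implied_usdeur = value_eur / value_usd   # USD->EUR rate implied by those two figures

print(f"Value in USD: {value_usd:.2f}")
print(f"Gain in USD:  {value_usd - shares * buy_px_usd:+.2f}")
print(f"Gain in EUR:  {value_eur - cost_eur:+.2f}  (implied USD->EUR {implied_usdeur:.4f})")
```

The output shows a small gain in dollar terms and a small loss in euro terms, which is exactly the mismatch the passage describes.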
{
"docid": "93bd1971ca0c84f2a6edc1cea926be7d",
"text": "Don't worry. The Cyprus situation could only occur because those banks were paying interest rates well above EU market rates, and the government did not tax them at all. Even the one-time 6.75% tax discussed is comparable to e.g. Germany and the Netherlands, if you average over the last 5 years. The simple solution is to just spread your money over multiple banks, with assets at each bank staying below EUR 100.000. There are more than 100 banks large enough that they'll come under ECB supervision this year; you'd be able to squirrel away over 10 million there. (Each branch of the Dutch Rabobank is insured individually, so you could even save 14 million there alone, and they're collectively AAA-rated.) Additionally, those savings will then be backed by more than 10 governments, many of which are still AAA-rated. Once you have to worry about those limits, you should really talk to an independent advisor. Investing in AAA government bonds is also pretty safe. The examples given by littleadv all involve known risky bonds. E.g. Argentina was on a credit watch, and paying 16% interest rates.",
"title": ""
},
{
"docid": "1c007d2f764ed54de2b635b1ceb950c4",
"text": "\"(Leaving aside the question of why should you try and convince him...) I don't know about a very convincing \"\"tl;dr\"\" online resource, but two books in particular convinced me that active management is generally foolish, but staying out of the markets is also foolish. They are: The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk by William Bernstein, and A Random Walk Down Wall Street: The Time Tested-Strategy for Successful Investing by Burton G. Malkiel Berstein's book really drives home the fact that adding some amount of a risky asset class to a portfolio can actually reduce overall portfolio risk. Some folks won a Nobel Prize for coming up with this modern portfolio theory stuff. If your friend is truly risk-averse, he can't afford not to diversify. The single asset class he's focusing on certainly has risks, most likely inflation / purchasing power risk ... and that risk that could be reduced by including some percentage of other assets to compensate, even small amounts. Perhaps the issue is one of psychology? Many people can't stomach the ups-and-downs of the stock market. Bernstein's also-excellent follow-up book, The Four Pillars of Investing: Lessons for Building a Winning Portfolio, specifically addresses psychology as one of the pillars.\"",
"title": ""
},
{
"docid": "296b7a2e96d632ad86e69f69b97d10fe",
"text": "It sounds like you are soliciting opinions a little here, so I'll go ahead and give you mine, recognizing that there's a degree of arbitrariness here. A basic portfolio consists of a few mutual funds that try to span the space of investments. My choices given your pot: I like VLTCX because regular bond index funds have way too much weight in government securities. Government bonds earn way too little. The CAPM would suggest a lot more weight in bonds and international equity. I won't put too much in bonds because...I just don't feel like it. My international allocation is artificially low because it's always a little more costly and I'm not sure how good the diversification gains are. If you are relatively risk averse, you can hold some of your money in a high-interest online bank and only put a portion in these investments. $100K isn't all that much money but the above portfolio is, I think, sufficient for most people. If I had a lot more I'd buy some REIT exposure, developing market equity, and maybe small cap. If I had a ton more (several million) I'd switch to holding individual equities instead of funds and maybe start looking at alternative investments, real estate, startups, etc.",
"title": ""
},
{
"docid": "f43694d6b791a3c2cd5acf2302cdeffa",
"text": "Investopedia does have tutorials about investments in different asset classes. Have you read them ? If you had heard of CFA, you can read their material if you can get hold of it or register for CFA. Their material is quite extensive and primarily designed for newbies. This is one helluva book and advice coming from persons who have showed and proved their tricks. And the good part is loads of advice in one single volume. And what they would suggest is probably opposite of what you would be doing in a hedge fund. And you can always trust google to fish out resources at the click of a button.",
"title": ""
},
{
"docid": "40f4b295402b38de190ba9198138eea9",
"text": "\"Currency, like gold and other commodities, isn't really much of an investment at all. It doesn't actually generate any return. Its value might fluctuate at a different rate than that of the US dollar or Euro, but that's about it. It might have a place as a very small slice of a basket of global currencies, but most US / European households don't actually need that sort of basket; it's really more of a risk-management strategy than an investment strategy and it doesn't really reflect the risks faced by an ordinary family in the US (or Europe or similar). Investments shouldn't generally be particularly \"\"exciting\"\". Generally, \"\"exciting\"\" opportunities mean that you're speculating on the market, not really investing in it. If you have a few thousand dollars you don't need and don't mind losing, you can make some good money speculating some of the time, but you can also just lose it all too. (Maybe there's a little room for excitement if you find amazing deals on ordinary investments at the very bottom of a stock market crash when decent, solid companies are on sale much cheaper than they ordinarily are.)\"",
"title": ""
},
{
"docid": "a3ead6164c50ccbd9cdb1398b9d611c2",
"text": "I don't know if this is exactly what you're looking for but Seedrs sorta fits what you're looking for. Private companies can raise money through funding rounds on Seedrs website. It wouldn't necessarily be local companies though. I've only recently found it myself so not sure if it has a uk or European slant to it. Personally I think it's a very interesting concept, private equity through crowd funding.",
"title": ""
},
{
"docid": "a487098eb5d373fc761b2f723dfdff16",
"text": "The problem is aggregating information from so many sources, countries, and economies. You are probably more aware of local laws, local tax changes, local economic performance, etc, so it makes sense that you'd be more in tune with your own country. If your intent is to be fully diversified, then buy a total world fund. A lot of hedge funds do what you are suggesting, but I think it requires either some serious math or some serious research. Note: I'm invested in emerging markets (EEM) for exactly the reason you suggest... diversification.",
"title": ""
}
] |
fiqa
|
c933a4e0aca37297a60630780ef79519
|
Is there anything I can do to prepare myself for the tax consequences of selling investments to buy a house?
|
[
{
"docid": "86cc49b09050e154d0e41b6e2828d838",
"text": "Don't let tax considerations be the main driver. That's generally a bad idea. You should keep tax in mind when making the decision, but don't let it be the main reason for an action. selling the higher priced shares (possibly at a loss even) - I think it's ok to do that, and it doesn't necessarily have to be FIFO? It is OK to do that, but consider also the term. Long term gain has much lower taxes than short term gain, and short term loss will be offsetting long term gain - means you can lose some of the potential tax benefit. any potential writeoffs related to buying a home that can offset capital gains? No, and anyway if you're buying a personal residence (a home for yourself) - there's nothing to write off (except for the mortgage interest and property taxes of course). selling other investments for a capital loss to offset this sale? Again - why sell at a loss? anything related to retirement accounts? e.g. I think I recall being able to take a loan from your retirement account in order to buy a home You can take a loan, and you can also withdraw up to 10K without a penalty (if conditions are met). Bottom line - be prepared to pay the tax on the gains, and check how much it is going to be roughly. You can apply previous year refund to the next year to mitigate the shock, you can put some money aside, and you can raise your salary withholding to make sure you're not hit with a high bill and penalties next April after you do that. As long as you keep in mind the tax bill and put aside an amount to pay it - you'll be fine. I see no reason to sell at loss or pay extra interest to someone just to reduce the nominal amount of the tax. If you're selling at loss - you're losing money. If you're selling at gain and paying tax - you're earning money, even if the earnings are reduced by the tax.",
"title": ""
},
{
"docid": "d1472c65dc003b8301b259da45632449",
"text": "Have you changed how you handle fund distributions? While it is typical to re-invest the distributions to buy additional shares, this may not make sense if you want to get a little cash to use for the home purchase. While you may already handle this, it isn't mentioned in the question. While it likely won't make a big difference, it could be a useful factor to consider, potentially if you ponder how risky is it having your down payment fluctuate in value from day to day. I'd just think it is more convenient to take the distributions in cash and that way have fewer transactions to report in the following year. Unless you have a working crystal ball, there is no way to definitively predict if the market will be up or down in exactly 2 years from now. Thus, I suggest taking the distributions in cash and investing in something much lower risk like a money market mutual fund.",
"title": ""
},
{
"docid": "a9dbe7f5f0b136736a208fcb32b3c391",
"text": "\"If you need less than $125k for the downpayment, I recommend you convert your mutual fund shares to their ETF counterparts tax-free: Can I convert conventional Vanguard mutual fund shares to Vanguard ETFs? Shareholders of Vanguard stock index funds that offer Vanguard ETFs may convert their conventional shares to Vanguard ETFs of the same fund. This conversion is generally tax-free, although some brokerage firms may be unable to convert fractional shares, which could result in a modest taxable gain. (Four of our bond ETFs—Total Bond Market, Short-Term Bond, Intermediate-Term Bond, and Long-Term Bond—do not allow the conversion of bond index fund shares to bond ETF shares of the same fund; the other eight Vanguard bond ETFs allow conversions.) There is no fee for Vanguard Brokerage clients to convert conventional shares to Vanguard ETFs of the same fund. Other brokerage providers may charge a fee for this service. For more information, contact your brokerage firm, or call 866-499-8473. Once you convert from conventional shares to Vanguard ETFs, you cannot convert back to conventional shares. Also, conventional shares held through a 401(k) account cannot be converted to Vanguard ETFs. https://personal.vanguard.com/us/content/Funds/FundsVIPERWhatAreVIPERSharesJSP.jsp Withdraw the money you need as a margin loan, buy the house, get a second mortgage of $125k, take the proceeds from the second mortgage and pay back the margin loan. Even if you have short term credit funds, it'd still be wiser to lever up the house completely as long as you're not overpaying or in a bubble area, considering your ample personal investments and the combined rate of return of the house and the funds exceeding the mortgage interest rate. Also, mortgage interest is tax deductible while margin interest isn't, pushing the net return even higher. $125k Generally, I recommend this figure to you because the biggest S&P collapse since the recession took off about 50% from the top. If you borrow $125k on margin, and the total value of the funds drop 50%, you shouldn't suffer margin calls. I assumed that you were more or less invested in the S&P on average (as most modern \"\"asset allocations\"\" basically recommend a back-door S&P as a mix of credit assets, managed futures, and small caps average the S&P). Second mortgage Yes, you will have two loans that you're paying interest on. You've traded having less invested in securities & a capital gains tax bill for more liabilities, interest payments, interest deductions, more invested in securities, a higher combined rate of return. If you have $500k set aside in securities and want $500k in real estate, this is more than safe for you as you will most likely have a combined rate of return of ~5% on $500k with interest on $500k at ~3.5%. If you're in small cap value, you'll probably be grossing ~15% on $500k. You definitely need to secure your labor income with supplementary insurance. Start a new question if you need a model for that. Secure real estate with securities A local bank would be more likely to do this than a major one, but if you secure the house with the investment account with special provisions like giving them copies of your monthly statements, etc, you might even get a lower rate on your mortgage considering how over-secured the loan would be. You might even be able to wrap it up without a down payment in one loan if it's still legal. Mortgage regulations have changed a lot since the housing crash.\"",
"title": ""
}
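The answer above asserts that a $125k margin loan against a roughly $500k portfolio should survive a 50% drawdown without a margin call. Here is a rough check of that claim; the 30% maintenance-margin requirement is an assumption (it varies by broker and by security) and is not a figure from the passage.

```python
# Rough margin-call check for the borrow-against-the-portfolio idea above.
# The maintenance requirement is broker-specific; 30% here is an assumption.

portfolio = 500_000.0        # securities held (figure from the passage)
loan = 125_000.0             # margin borrowed (figure from the passage)
maintenance = 0.30           # assumed maintenance margin requirement

for drop in (0.0, 0.25, 0.50):
    value = portfolio * (1 - drop)
    equity = value - loan
    ok = equity / value >= maintenance
    print(f"drop {drop:.0%}: equity {equity:,.0f} ({equity / value:.0%}) "
          f"{'ok' if ok else 'MARGIN CALL'}")
```

Even at a 50% drop the equity ratio stays at 50%, comfortably above the assumed requirement, which is consistent with the passage's reasoning for choosing the $125k figure.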
] |
[
{
"docid": "c8a32bd41ce337dbffc94eb86141d43a",
"text": "In response to one of the comments you might be interested in owning the new home as a rental property for a year. You could flip this thinking and make the current home into a rental property for a period of time (1 year seems to be the consensus, consult an accountant familiar with real estate). This will potentially allow for a 1031 exchange into another property -- although I believe that property can't then be a primary residence. All potentially not worth the complication for the tax savings, but figured I'd throw it out there. Also, the 1031 exchange defers taxes until some point in the future in which you finally sell the asset(s) for cash.",
"title": ""
},
{
"docid": "e45082ebd31646e9466456f04258ad79",
"text": "\"Please declare everything you earn in India as well as the total amount of assets (it's called FBAR). The penalties for not declaring is jail time no matter how small the amount (and lots of ordinary people every 2-3 years are regularly sent to jail for not declaring such income). It's taken very seriously by the IRS - and any Indian bank who has an office in the US or does business here, can be asked by IRS to provide any bank account details for you. You will get deductions for taxes already paid to a foreign country due to double taxation, so there won't be any additional taxes because income taxes in US are on par or even lower than that in India. Using tricks (like transferring ownership to your brother) may not be worth it. Note: you pay taxes only when you realize gains anyway - both in India or here, so why do you want to take such hassles. If you transfer to your brother, it will be taxed only until you hold them. Make sure you have exact dates of gains between the date you came to US and the date you \"\"gifted\"\" to your brother. As long as you clearly document that the stocks transferred to your brother was a gift and you have no more claims on them, it should be ok, but best to consult a CPA in the US. If you have claims on them, example agreement that you will repurchase them, then you will still continue to pay taxes. If you sell your real estate investments in India, you have to pay tax on the gains in the US (and you need proof of the original buying cost and your sale). If you have paid taxes on the real estate gains in India, then you can get deduction due to double tax avoidance treaty. No issues in bringing over the capital from India to US.\"",
"title": ""
},
{
"docid": "d129de5049e0ce307a46337a8462b5c2",
"text": "To your first question: YES. Capital gains and losses on real-estate are treated differently than income. Note here for exact IRS standards. The IRS will not care about percentage change but historical (recorded) amounts. To your second question: NO Are you taxed when buying a new stock? No. But be sure to record the price paid for the house. Note here for more questions. *Always consult a CPA for tax advice on federal tax returns.",
"title": ""
},
{
"docid": "985b9e21c615e3610c4f8c94212c7da3",
"text": "You can withdraw the contributions you made to Roth IRA tax free. Any withdrawals from Roth IRA count first towards the contributions, then conversions, and only then towards the gains which are taxable. You can also withdraw up to $10000 of the taxable portion penalty free (from either the Traditional IRA or the Roth IRA, or the combination of both) if it is applied towards the purchase of your first primary residence (i.e.: you don't own a place yet, and you're buying your first home, which will become your primary residence). That said, however, I cannot see how you can buy a $250K house. You didn't say anything about your income, but just the cash needed for the down-payment will essentially leave you naked and broke. Consider what happens if you have an emergency, out of a job for a couple of months, or something else of that kind. It is generally advised to have enough cash liquid savings to keep you afloat for at least half a year (including mortgage payments, necessities and whatever expenses you need to spend to get back on track - job searching, medical, moving, etc). It doesn't look like you're anywhere near that. Remember, many bankruptcies are happening because of the cash-flow problem, not the actual ability to repay debts on the long run.",
"title": ""
},
{
"docid": "b3371f553b12a1b7800b33aa60fbd97b",
"text": "Yes (most likely). If you are exchanging investments for cash, you will have to pay tax on that - disregarding capital losses, capital loss carryovers, AGI thresholds, and other special rules (which there is no indication of in your question). You will have to calculate the gain on Schedule D, and report that as income on your 1040. This is the case whether you buy different or same stocks.",
"title": ""
},
{
"docid": "c470b81e98a85a192222aefeb2d08363",
"text": "yes. you can take out 500,000 form your paid of house. you pay back 500,000 at 3.5. percent. you do get a tax break for not owning your house. it is less then 3.5 you are paying back the back. about one forth of that, BUT you take the 500,000 in invest. Now cd low 1 percent, stock is risky. You can do REIT, with are about 8 to 12 every year. so even at 8 - tax 1.5 is 6.5 - 3.5 bank loan. that 3 percent on your 500,000 thousand, plus tax break, but that only at 8 percent. or 500,000 and buy a apartment building, again about 7 to 10 percent, so that 2 to 3 percent profit, but the building goes up over years.",
"title": ""
},
{
"docid": "8394b41dc5e16d17c616139c687e014c",
"text": "If it is US, you need to take tax implications into account. Profit taken from sale of your home is taxable. One approach would be to take the tax hit, pay down the student loans, rent, and focus any extra that you can on paying off the student loans quickly. The tax is on realized gains when you sell the property. I think that any equity under the original purchase price is taxed at a lower rate (or zero). Consult a tax pro in your area. Do not blindly assume buying is better than renting. Run the numbers. Rent Vs buy is not a question with a single answer. It depends greatly on the real estate market where you are, and to a lesser extent on your personal situation. Be sure to include maintenance and HOA fees, if any, on the ownership side. Breakeven time on a new roof or a new HVAC unit or an HOA assessment can be years, tipping the scales towards renting. Include the opportunity cost by including the rate of return on the 100k on the renting side (or subtracting it on the ownership side). Be sure to include the tax implications on the ownership side, especially taxes on any profits from the sale. If the numbers say ownership in your area is better, then try for as small of a mortgage as you can get in a growing area. Assuming that the numbers add up to buying: buy small and live frugally, focus on increasing discretionary spending, and using it to pay down debt and then build wealth. If they add up to renting, same thing but rent small.",
"title": ""
},
{
"docid": "76dbbced33adaccadc525e0a0ba9e288",
"text": "\"The ultimate purpose of Case-Schiller is to build contracts that you can use to stop worrying about this, for a price. You or your lender might buy cash settled put options based on the index, and hope that if your home falls in value, the your options become \"\"in the money\"\" to make up the shortfall. The major problem that I can see with this is finding people to take the other side of that contract. Renters would be the primary candidates, but Americans are on average so overweight in real estate that there really isn't anyone underexposed to real estate who would benefit from diversification, and the tax advantage will give people far cheaper avenues address this. Viewed in this light, your question has a sort of obvious answer: Case-Schiller is historical data, and you need to know about the future historical data. Case-Schiller can't do it alone, but you can use futures markets to predict it. Problem you'll have is that the market itself will optimize this temporal trade: if there's a market drop anticipated, the market will charge you more for market drop insurance.\"",
"title": ""
},
{
"docid": "04cfc11786b1d6c8709679a6c244060f",
"text": "Assuming that you have capital gains, you can expect to have to pay taxes on them. It might be short term, or long term capital gains. If you specify exactly which shares to sell, it is possible to sell mostly losers, thus reducing or eliminating capital gains. There are separate rules for 401K and other retirement programs regarding down payments for a house. This leads to many other issues such as the hit your retirement will take.",
"title": ""
},
{
"docid": "b240bf3f322d93678d50fc93a1738b58",
"text": "Capital losses from the sale of stocks can be used to offset capital gains from the sale of a house, assuming that house was a rental property the whole time. If it was your principal residence, the capital gains are not taxed. If you used it as both a rental and a principal residence, then it gets more complicated: http://www.cra-arc.gc.ca/tx/ndvdls/tpcs/ncm-tx/rtrn/cmpltng/rprtng-ncm/lns101-170/127/rsdnc/menu-eng.html",
"title": ""
},
{
"docid": "7581bf8f1cb7cf427aacac0e7886d54e",
"text": "\"This answer is based on Australian tax, which is significantly different. I only offer it in case others want to compare situations. In Australia, a popular tax reduction technique is \"\"Negative Gearing\"\". Borrow from a bank, buy an investment property. If the income frome the new property is not enough to cover interest payments (plus maintenance etc) then the excess each year is a capital loss - which you claim each year, as an offset to your income (ie. pay less tax). By the time you reach retirement, the idea is to have paid off the mortgage. You then live off the revenue stream in retirement, or sell the property for a (taxed) lump sum.\"",
"title": ""
},
{
"docid": "b53f7aa9e406ea773a4b45621660c971",
"text": "Your first home can be up to £450,000 today. But that figure is unlikely to stay the same over 40 years. The government would need to raise it in line with inflation otherwise in 40 years you won't be able to buy quite so much with it. If inflation averages 2% over your 40 year investment period say, £450,000 would buy you roughly what £200,000 would today. Higher rates of inflation will reduce your purchasing power even faster. You pay stamp duty on a house. For a house worth £450,000 that would be around £12,500. There are also estate agent's fees (typically 1-2% of the purchase price, although you might be able to do better) and legal fees. If you sell quickly you'd only be able to access the balance of the money less all those taxes and fees. That's quite a bit of your bonus lost so why did you tie your money up in a LISA for all those years instead of investing in the stock market directly? One other thing to note is that you buy a LISA from your post tax income. You pay into a pension using your pre-tax income so if you're investing for your retirement then a pension will start with a 20% bonus if you're a lower rate taxpayer and a whopping 40% bonus if you're a higher rate taxpayer. If you're a higher rate taxpayer a pension is much better value.",
"title": ""
},
{
"docid": "c8e90732e325599af6175216e695a35f",
"text": "It would be better for you to sell yourself and pay capital gains tax than to transfer to your parents and pay the gift tax. Also, sham transfer (you transfer to your mother only so that she could sell and transfer back to you without you paying taxes) will be probably categorized as tax evasion, which is a criminal offense that could lead to your deportation. What the US should or should not claim you can take to your congressman, but the fact is that the US does claim tax on capital gains even if you bought the asset before becoming US tax resident, and that's the current law.",
"title": ""
},
{
"docid": "18709a398b2b7066a205463a07181a42",
"text": "There's a couple issues to consider: When you sell your primary home, the IRS gives you a $500k exemption (married, filing jointly) on gain. If you decide not to sell your current house now, and you subsequently fall outside the ownership/use tests, then you may owe taxes on any gains when you sell the house. Rather than being concerned about your net debt, you should be concerned about your monthly debt payments. Generally speaking, you cannot have debt payments of more than 36% of your monthly income. If you can secure a renter for your current property, then you may be able to reach this ratio for your next (third) property. Also, only 75% of your expected monthly rental income is considered for calculating your 36% number. (This is not an exhaustive list of risks you expose yourself to). The largest risk is if you or your spouse find yourself without income (e.g. lost job, accident/injury, no renter), then you may be hurting to make your monthly debt payments. You will need to be confident that you can pay all your debts. A good rule that I hear is having the ability to pay 6 months worth of debt. This may not necessarily mean having 6 months worth of cash on hand, but access to that money through personal lines of credit, borrowing against assets, selling stocks/investments, etc. You also want to make sure that your insurance policies fully cover you in the event that a tenant sues you, damages property, etc. You also don't want to face a situation where you are sued because of discrimination. Hiring a property management company to take care of these things may be a good peace-of-mind.",
"title": ""
},
{
"docid": "aae1e5e2604e7ef112e1abe3a414971f",
"text": "IBAN is enough within SEPA and it should be so for your bank as well. Tell them to join our decade, or change bank. I received bank transfers from other continents to my SEPA account in the past and I don't remember ever needing to say more than my IBAN and BIC. Banks can ask all sorts of useless information, but if your bank doesn't have a standard (online) form for the operation then it probably means you're going to spend a lot.",
"title": ""
}
] |
fiqa
|
e8bbcf91ae40d415a585146e1ca3f8db
|
How can I profit on the Chinese Real-Estate Bubble?
|
[
{
"docid": "7f827721412df38aabe25fe0136f47c0",
"text": "\"Perhaps buying some internationally exchanged stock of China real-estate companies? It's never too late to enter a bubble or profit from a bubble after it bursts. As a native Chinese, my observations suggest that the bubble may exist in a few of the most populated cities of China such as Beijing, Shanghai and Shenzhen, the price doesn't seem to be much higher than expected in cities further within the mainland, such as Xi'an and Chengdu. I myself is living in Xi'an. I did a post about the urban housing cost of Xi'an at the end of last year: http://www.xianhotels.info/urban-housing-cost-of-xian-china~15 It may give you a rough idea of the pricing level. The average of 5,500 CNY per square meter (condo) hasn't fluctuated much since the posting of the entry. But you need to pay about 1,000 to 3,000 higher to get something desirable. For location, just search \"\"Xi'an, China\"\" in Google Maps. =========== I actually have no idea how you, a foreigner can safely and easily profit from this. I'll just share what I know. It's really hard to financially enter China. To prevent oversea speculative funds from freely entering and leaving China, the Admin of Forex (safe.gov.cn) has laid down a range of rigid policies regarding currency exchange. By law, any native individual, such as me, is imposed of a maximum of $50,000 that can be converted from USD to CNY or the other way around per year AND a maximum of $10,000 per day. Larger chunks of exchange must get the written consent of the Admin of Forex or it will simply not be cleared by any of the banks in China, even HSBC that's not owned by China. However, you can circumvent this limit by using the social ID of your immediate relatives when submitting exchange requests. It takes extra time and effort but viable. However, things may change drastically should China be in a forex crisis or simply war. You may not be able to withdraw USD at all from the banks in China, even with a positive balance that's your own money. My whole income stream are USD which is wired monthly from US to Bank of China. I purchased a property in the middle of last year that's worth 275,000 CNY using the funds I exchanged from USD I had earned. It's a 43.7% down payment on a mortgage loan of 20 years: http://www.mlcalc.com/#mortgage-275000-43.7-20-4.284-0-0-0.52-7-2009-year (in CNY, not USD) The current household loan rate is 6.12% across the entire China. However, because this is my first property, it is discounted by 30% to 4.284% to encourage the first house purchase. There will be no more discounts of loan rate for the 2nd property and so forth to discourage speculative stocking that drives the price high. The apartment I bought in July of 2009 can easily be sold at 300,000 now. Some of the earlier buyers have enjoyed much more appreciation than I do. To give you a rough idea, a house bought in 2006 is now evaluated 100% more, one bought in 2008 now 50% more and one bought in the beginning of 2009 now 25% more.\"",
"title": ""
},
{
"docid": "0432509d0463cf57dfe90785b82f0d78",
"text": "Create, market and perform seminars advising others how to get rich from the Chinese Real-Estate Bubble. Much more likely to be profitable; and you can do it from the comfort of your own country, without currency conversions.",
"title": ""
}
] |
[
{
"docid": "133154f62f8331a8df866bfc4aab2f0b",
"text": "\"The trade-off seems to be quite simple: \"\"How much are you going to get if you sell it\"\" against \"\"How much are you going to get if you rent it out\"\". Several people already hinted that the rental revenue may be optimistic, I don't have anything to add to this, but keep in mind that if someone pays 45k for your apartment, the net gains for you will likely be lower as well. Another consideration would be that the value of your apartment can change, if you expect it to rise steadily you may want to think twice before selling. Now, assuming you have calculated your numbers properly, and a near 0% opportunity cost: 45,000 right now 3,200 per year The given numbers imply a return on investment of 14 years, or 7.1%. Personal conclusion: I would be surprised if you can actually get a 3.2k expected net profit for an apartment that rents out at 6k per year, but if you are confident the reward seems to be quite nice.\"",
"title": ""
},
{
"docid": "2827cf778c230ac62baa016936a44c42",
"text": "Serious answer: If 7 banks owned the vast majority of houses for sale -- that is, on their balance sheet, at the peak of the housing bubble -- there would be. These 7 LCD companies produced the majority of LCDs globally. Real estate is far more decentralized, and in many times the bank merely provided financing for a third-party sale (from the builder, from any one of a hundred or so real estate companies, etc). (But I am assuming you probably weren't looking for a factual answer, anyway.)",
"title": ""
},
{
"docid": "5af78f8ae516b739e9b1687d9f881c08",
"text": "The right time to buy real estate is easy to spot. It's when it is difficult to get loans or when real estate agents selling homes are tripping over each other. It's the wrong time to buy when houses are sold within hours of the sign going up. The way to profit from equities over time is to dollar-cost average a diversified portfolio over time, while keeping cash reserves of 5-15% around. When major corrections strike, buy a little extra. You can make money at trading. But it requires that you exert a consistent effort and stay up to date on your investments and future prospects.",
"title": ""
},
{
"docid": "1d3076b1b2a9e936b239cfe2cddfc971",
"text": "It is worth noting first that Real Estate is by no means passive income. The amount of effort and cost involved (maintenance, legal, advertising, insurance, finding the properties, ect.) can be staggering and require a good amount of specialized knowledge to do well. The amount you would have to pay a management company to do the work for you especially with only a few properties can wipe out much of the income while you keep the risk. However, keshlam's answer still applies pretty well in this case but with a lot more variability. One million dollars worth of property should get you there on average less if you do much of the work yourself. However, real estate because it is so local and done in ~100k chunks is a lot more variable than passive stocks and bonds, for instance, as you can get really lucky or really unlucky with location, the local economy, natural disasters, tenants... Taking out loans to get you to the million worth of property faster but can add a lot more risk to the process. Including the risk you wouldn't have any money on retirement. Investing in Real Estate can be a faster way to retirement than some, but it is more risky than many and definitely not passive.",
"title": ""
},
{
"docid": "5bf3487c2e9cffeaedd48bd6196fafaa",
"text": "\"China's regulators, it seems, are on the attack. Guo Shuqing, chairman of the China Banking Regulatory Commission, announced recently that he'd resign if he wasn't able to discipline the banking system. Under his leadership, the CBRC is stepping up scrutiny of the role of trust companies and other financial institutions in helping China's banks circumvent lending restrictions. The People's Bank of China has also been on the offensive. It has recently raised the cost of liquidity, attacked riskier funding structures among smaller banks, and discontinued a program that effectively monetized one-fifth of last year's increase in lending. Are the regulators finally getting serious about reining in credit creation? The answer is an easy one: Yes if they're willing to allow economic growth to slow substantially, probably to 3 percent or less, and no if they aren't. inRead invented by Teads This is because there's a big difference between China's sustainable growth rate, based on rising demand driven by household consumption and productive investment, and its actual GDP growth rate, which is boosted by massive lending to fund investment projects that are driven by the need to generate economic activity and employment. Economists find it very difficult to formally acknowledge the difference between the two rates, and many don't even seem to recognize that it exists. Yet this only shows how confused economists are about gross domestic product more generally. The confusion arises because a country's GDP is not a measure of the value of goods and services it creates but rather a measure of economic activity. In a market economy, investment must create enough additional productive capacity to justify the expenditure. If it doesn't, it must be written down to its true economic value. This is why GDP is a reasonable proxy in a market economy for the value of goods and services produced. But in a command economy, investment can be driven by factors other than the need to increase productivity, such as boosting employment or local tax revenue. What's more, loss-making investments can be carried for decades before they're amortized, and insolvency can be ignored. This means that GDP growth can overstate value creation for decades. That's what has happened in China. In the first quarter of 2017, China added debt equal to more than 40 percentage points of GDP -- an amount that has been growing year after year. In 2011, the World Economic Forum predicted that China's debt would increase by a worrying $20 trillion by 2020. By 2016, it had already increased by $22 trillion, according to the most conservative estimates, and at current rates it will increase by as much as $50 trillion by 2020. These numbers probably understate the reality. If all this debt hasn't boosted China's GDP growth to substantially more than its potential growth rate, then what was the point? And why has it proven so difficult for the government to rein it in? It has promised to do so since 2012, yet credit growth has only accelerated, reaching some of the highest levels ever seen for a developing country. The answer is that credit creation had to accelerate to boost GDP growth above the growth rate of productive capacity. Much, if not most, of China's 6.5 percent GDP growth is simply an artificial boost in economic activity with no commensurate increase in the capacity to create goods and services. It must be fully reversed once credit stops growing. 
To make matters worse, if high debt levels generate financial distress costs for the economy -- as already seems to be happening -- the amount that must be reversed will substantially exceed the original boost. Once credit is under control, China will have lower but healthier GDP growth rates. If the economy rebalances, most Chinese might not even notice: It would require that household income -- which has grown much more slowly than GDP for nearly three decades -- now grow faster, so that the sharp slowdown in economic growth won't be matched by an equivalent slowdown in wage growth. But to manage this rebalancing requires substantial transfers of wealth from local governments to ordinary households to boost consumption. This is why China hasn't been able to control credit growth in the past. The central government has had to fight off provincial \"\"vested interests,\"\" who oppose any substantial transfer of wealth. Without these transfers, slower GDP growth would mean higher unemployment. Whether regulators can succeed in reining in credit creation this time is ultimately a political question, and depends on the central government's ability to force through necessary reforms. Until then, as long as China has the debt capacity, GDP growth rates will remain high. But to see that as healthy is to miss the point completely.\"",
"title": ""
},
{
"docid": "cdc8ee4b63ae9ac426fd4dad8942a239",
"text": "Huh, well it's working for me. I've got 3 properties and am a little over 25% of my goal to never work again. How would you suggest one get rich? I assume you have a better plan than he does?",
"title": ""
},
{
"docid": "cd0b25899dfe8a0d7965310d6cfc769b",
"text": "Playing the markets is simple...always look for the sucker in the room and outsmart him. Of course if you can't tell who that sucker is it's probably you. If the strategy you described could make you rich, cnbc staff would all be billionaires. There are no shortcuts, do your research and decide on a strategy then stick to it in all weather or until you find a better one.",
"title": ""
},
{
"docid": "e0b589d58e89dc2487eaf6e429674240",
"text": "\"Americans are snapping, like crazy. And not only Americans, I know a lot of people from out of country are snapping as well, similarly to your Australian friend. The market is crazy hot. I'm not familiar with Cleveland, but I am familiar with Phoenix - the prices are up at least 20-30% from what they were a couple of years ago, and the trend is not changing. However, these are not something \"\"everyone\"\" can buy. It is very hard to get these properties financed. I found it impossible (as mentioned, I bought in Phoenix). That means you have to pay cash. Not everyone has tens or hundreds of thousands of dollars in cash available for a real estate investment. For many Americans, 30-60K needed to buy a property in these markets is an amount they cannot afford to invest, even if they have it at hand. Also, keep in mind that investing in rental property requires being able to support it - pay taxes and expenses even if it is not rented, pay to property managers, utility bills, gardeners and plumbers, insurance and property taxes - all these can amount to quite a lot. So its not just the initial investment. Many times \"\"advertised\"\" rents are not the actual rents paid. If he indeed has it rented at $900 - then its good. But if he was told \"\"hey, buy it and you'll be able to rent it out at $900\"\" - wouldn't count on that. I know many foreigners who fell in these traps. Do your market research and see what the costs are at these neighborhoods. Keep in mind, that these are distressed neighborhoods, with a lot of foreclosed houses and a lot of unemployment. It is likely that there are houses empty as people are moving out being out of job. It may be tough to find a renter, and the renters you find may not be able to pay the rent. But all that said - yes, those who can - are snapping.\"",
"title": ""
},
{
"docid": "1695261b4ee40cb33966686a30309dac",
"text": "Well, Taking a short position directly in real estate is impossible because it's not a fungible asset, so the only way to do it is to trade in its derivatives - Investment Fund Stock, indexes and commodities correlated to the real estate market (for example, materials related to construction). It's hard to find those because real estate funds usually don't issue securities and rely on investment made directly with them. Another factor should be that those who actually do have issued securities aren't usually popular enough for dealers and Market Makers to invest in it, who make it possible to take a short position in exchange for some spread. So what you can do is, you can go through all the existing real estate funds and find out if any of them has a broker that let's you short it, in other words which one of them has securities in the financial market you can buy or sell. One other option is looking for real estate/property derivatives, like this particular example. Personally, I would try to computationally find other securities that may in some way correlate with the real estate market, even if they look a bit far fetched to be related like commodities and stock from companies in construction and real estate management, etc. and trade those because these have in most of the cases more liquidity. Hope this answers your question!",
"title": ""
},
{
"docid": "99c930926902e10d8b135a90ddfbcc9a",
"text": "THANK YOU so much! That is exactly what I was looking for. Unfortunately I'm goign to be really busy for 7 days but I'd love to tear through some of this material and ask you some questions if you don't mind. What do you do for a living now? Still in real estate? Did you go toward the brokerage side or are you still consulting? What's the atmosphere/day-to-day like?",
"title": ""
},
{
"docid": "51efd4c92fe5580c043a1793767c9e62",
"text": "No, there is no linkage to the value of real estate and inflation. In most of the United States if you bought at the peak of the market in 2006, you still haven't recovered 7+ years later. Also real estate has a strong local component to the price. Pick the wrong location or the wrong type of real estate and the value of your real estate will be dropping while everybody else sees their values rising. Three properties I have owned near Washington DC, have had three different price patterns since the late 80's. Each had a different starting and ending point for the peak price rise. You can get lucky and make a lot, but there is no way to guarantee that prices will rise at all during the period you will own it.",
"title": ""
},
{
"docid": "001ad7f8030aa55b992aab75c2bd3b7d",
"text": "This is one way in which the scheme could work: You put your own property (home, car, piece of land) as a collateral and get a loan from a bank. You can also try to use the purchased property as security, but it may be difficult to get 100% loan-to-value. You use the money to buy a property that you expect will rise in value and/or provide rent income that is larger than the mortgage payment. Doing some renovations can help the value rise. You sell the property, pay back the loan and get the profits. If you are fast, you might be able to do this even before the first mortgage payment is due. So yeah, $0 of your own cash invested. But if the property doesn't rise in value, you may end up losing the collateral.",
"title": ""
},
{
"docid": "3fe13b33eb0c57418a1a75e14034bc8e",
"text": "I know there are a lot of papers on bubbles already, but I was always interested in how many were retail/individually driven vs institutionally driven bubbles - at least who plays a larger role. The American pension crisis is also another interesting topic that may be fun to write about. With everyone calling doomsday on them, maybe you can shed some light on some (if any) of the bs or touch on what options they may actually have to survive. Topics on investor behaviour is always a safe bet - potential returns lost due to home bias, investment behaviour of millennials, bla bla bla. The dangers of the rise of passive investments (ETFs) if any & if it actually generates more room to capture alpha since there may be greater inefficiencies. Also the impact of a stock's return relative to the amount of ETFs it is a constituent of So many things - pls advise what you end up deciding to choose and post the paper when the time comes!",
"title": ""
},
{
"docid": "1372eca98843f33d82d53e28b69a5f0b",
"text": "\"No, it can really not. Look at Detroit, which has lost a million residents over the past few decades. There is plenty of real estate which will not go for anything like it was sold. Other markets are very risky, like Florida, where speculators drive too much of the price for it to be stable. You have to be sure to buy on the downturn. A lot of price drops in real estate are masked because sellers just don't sell, so you don't really know how low the price is if you absolutely had to sell. In general, in most of America, anyway, you can expect Real Estate to keep up with inflation, but not do much better than that. It is the rental income or the leverage (if you buy with a mortgage) that makes most of the returns. In urban markets that are getting an influx of people and industry, however, Real Estate can indeed outpace inflation, but the number of markets that do this are rare. Also, if you look at it strictly as an investment (as opposed to the question of \"\"Is it worth it to own my own home?\"\") there are a lot of additional costs that you have to recoup, from property taxes to bills, rental headaches etc. It's an investment like any other, and should be approached with the same due diligence.\"",
"title": ""
},
{
"docid": "9183b4c1428a12698926e2e6ad9e4e91",
"text": "A possibility could be real estate brokerage firms such as Realogy or Prudential. Although a brokerage commission is linked to the sale prices it is more directly impacted by sales volume. If volume is maintained or goes up a real estate brokerage firm can actually profit rather handsomely in an up market or a down market. If sales volume does go up another option would be other service markets for real estate such as real estate information and marketing websites and sources i.e. http://www.trulia.com. Furthermore one can go and make a broad generalization such as since real estate no longer requires the same quantity of construction material other industries sensitive to the price of those commodities should technically have a lower cost of doing business. But be careful in the US much of the wealth an average american has is in their home. In this case this means that the economy as a whole takes a dive due to consumer uncertainty. In which case safe havens could benefit, may be things like Proctor & Gamble, gold, or treasuries. Side Note: You can always short builders or someone who loses if the housing market declines, this will make your investment higher as a result of the security going lower.",
"title": ""
}
] |
fiqa
|
fde84e2a1edea27e98b0863692548d47
|
Is the Yale/Swenson Asset Allocation Too Conservative for a 20 Something?
|
[
{
"docid": "1034f141e13d0ab627501a394187997c",
"text": "You can look the Vanguard funds up on their website and view a risk factor provided by Vanguard on a scale of 1 to 5. Short term bond funds tend to get their lowest risk factor, long term bond funds and blended investments go up to about 3, some stock mutual funds are 4 and some are 5. Note that in 2008 Swenson himself had slightly different target percentages out here that break out the international stocks into emerging versus developed markets. So the average risk of this portfolio is 3.65 out of 5. My guess would be that a typical twenty-something who expects to retire no earlier than 60 could take more risk, but I don't know your personal goals or circumstances. If you are looking to maximize return for a level of risk, look into Modern Portfolio Theory and the work of economist Harry Markowitz, who did extensive work on the topic of maximizing the return given a set risk tolerance. More info on my question here. This question provides some great book resources for learning as well. You can also check out a great comparison and contrast of different portfolio allocations here.",
"title": ""
},
{
"docid": "d109090ba05e855c9985aee6d8e11fed",
"text": "\"I don't think the advice to take lots more risk when young makes so much sense. The additional returns from loading up on stocks are overblown; and the rocky road from owning 75-100% stocks will almost certainly mess you up and make you lose money. Everyone thinks they're different, but none of us are. One big advantage of stocks over bonds is tax efficiency only if you buy index funds and don't ever sell them. But this does not matter in a retirement account, and outside a retirement account you can use tax-exempt bonds. Stocks have higher returns in theory but to have a reasonable guarantee of higher returns from them, you need around a 30-year horizon. That is a long, long time. Psychologically, a 60/40 stocks/bonds portfolio, or something with similar risk mixing in a few more alternative assets like Swenson's, is SO MUCH better. With 100% stocks you can spend 10 or 15 years saving money and your investment returns may get you nowhere. Think what that does to your motivation to save. (And how much you save is way more important than what you invest in.) The same doesn't happen with a balanced portfolio. With a balanced portfolio you get reasonably steady progress. You can still have a down year, but you're a lot less likely to have a down decade or even a down few years. You save steadily and your balance goes up fairly steadily. The way humans really work, this is so important. For the same kind of reason, I think it's great to buy one fund that has both stocks and bonds in there. This forces you to view the thing as a whole instead of wrongly looking at the individual asset class \"\"buckets.\"\" And it also means rebalancing will happen automatically, without having to remember to do it, which you won't. Or if you remember you won't do it when you should, because stocks are doing so well, or some other rationalization. Speaking of rebalancing, that's where a lot of the steady, predictable returns come from if you have a nice balanced portfolio. You can make money over time even if both asset classes end up going nowhere, as long as they bounce around somewhat independently, so you'll buy low and sell high when you rebalance. To me the ideal is an all-in-one fund that aims for about 60/40 stocks/bonds level of risk, somewhat more diversified than stocks/bonds is great (international stock, commodities, high yield, REIT, etc.). You can just buy that at age 20 and keep it until you retire. In beautiful ideal-world economic theory, buy 90% stocks when young. Real world with human brain involved: I love balanced funds. The steady gains are such a mental win. The \"\"target retirement\"\" funds are not a bad option, but if you buy the matching year for your age, I personally wish they had less in stocks. If you want to read more on the \"\"equity premium\"\" (how much more you make from owning stocks) here are a couple of posts on it from a blog I like: Update: I wrote this up more comprehensively on my blog,\"",
"title": ""
},
{
"docid": "dae84622294f488ae7fff5c11d07754a",
"text": "That looks like a portfolio designed to protect against inflation, given the big international presence, the REIT presence and TIPS bonds. Not a bad strategy, but there are a few things that I'd want to look at closely before pulling the trigger.",
"title": ""
},
{
"docid": "54d0a04493a4b5b0306b714af1d5f04c",
"text": "\"I think Swenson's insight was that the traditional recommendation of 60% stocks plus 40% bonds has two serious flaws: 1) You are exposed to way too much risk by having a portfolio that is so strongly tied to US equities (especially in the way it has historically been recommend). 2) You have too little reward by investing so much of your portfolio in bonds. If you can mix a decent number of asset classes that all have equity-like returns, and those asset classes have a low correlation with each other, then you can achieve equity-like returns without the equity-like risk. This improvement can be explicitly measured in the Sharpe ratio of you portfolio. (The Vanguard Risk Factor looks pretty squishy and lame to me.) The book the \"\"The Ivy Portfolio\"\" does a great job at covering the Swenson model and explains how to reasonably replicate it yourself using low fee ETFs.\"",
"title": ""
}
] |
[
{
"docid": "6e8e873ae7e17639e49b6536f6f2130d",
"text": "The range is fine. It's ~ 1-2X your annual income. First, and foremost - your comment on the 401(k), not knowing the fees, is a red flag to me. The difference between low cost options (say sub .25%) and the high fees (over .75%) has a huge impact to your long term savings, and on the advice I'd give regarding maximizing the deposits. At 26, you and your wife have about 20% of your income as savings. This is on the low side, in my opinion, but others suggest a year's salary by age 35 which implies you're not too far behind. Given your income, you are most likely in the 25% federal bracket. I'd like you to research your 401(k) expenses, and if they are reasonable, maximise the deposit. If your wife has no 401(k) at work, she can deposit to an IRA, pre-tax. It's wise to keep 6 months of expenses as liquid cash (or short term CDs) as an emergency fund in case of such things as a job layoff. They say to expect a month of job hunting for each $10K you make, so having even a year to find a new job isn't unheard of. One thing to consider is to simply kill the mortgage. Before suggesting this, I'd ask what your risk tolerance is? If you took $100K and put it right into the S&P, would you worry every time you heard the market was down today? Or would you happily leave it there for the next 40 years? If you prefer safety, or at least less risk, paying off the mortgage will free up the monthly payment, and let you dollar cost average into the new investments over time. You'll have the experience of seeing your money grow and learn to withstand the volatility. The car loan is a low rate, if you prefer to keep the mortgage for now, paying the car loan is still a guaranteed 3%, vs the near 0% the bank will give you.",
"title": ""
},
{
"docid": "0918254a089cca9fd94fee63324ec519",
"text": "\"Your bank's fund is not an index fund. From your link: To provide a balanced portfolio of primarily Canadian securities that produce income and capital appreciation by investing primarily in Canadian money market instruments, debt securities and common and preferred shares. This is a very broad actively managed fund. Compare this to the investment objective listed for Vanguard's VOO: Invests in stocks in the S&P 500 Index, representing 500 of the largest U.S. companies. There are loads of market indices with varying formulas that are supposed to track the performance of a market or market segment that they intend to track. The Russel 2000, The Wilshire 1000, The S&P 500, the Dow Industrial Average, there is even the SSGA Gender Diversity Index. Some body comes up with a market index. An \"\"Index Fund\"\" is simply a Mutual Fund or Exchange Traded Fund (ETF) that uses a market index formula to make it's investment decisions enabling an investor to track the performance of the index without having to buy and sell the constituent securities on their own. These \"\"index funds\"\" are able to charge lower fees because they spend $0 on research, and only make investment decisions in order to track the holdings of the index. I think 1.2% is too high, but I'm coming from the US investing world it might not be that high compared to Canadian offerings. Additionally, comparing this fund's expense ratio to the Vanguard 500 or Total Market index fund is nonsensical. Similarly, comparing the investment returns is nonsensical because one tracks the S&P 500 and one does not, nor does it seek to (as an example the #5 largest holding of the CIBC fund is a Government of Canada 2045 3.5% bond). Everyone should diversify their holdings and adjust their investment allocations as they age. As you age you should be reallocating away from highly volatile common stock and in to assets classes that are historically more stable/less volatile like national government debt and high grade corporate/local government debt. This fund is already diversified in to some debt instruments, depending on your age and other asset allocations this might not be the best place to put your money regardless of the fees. Personally, I handle my own asset allocations and I'm split between Large, Mid and Small cap low-fee index funds, and the lowest cost high grade debt funds available to me.\"",
"title": ""
},
{
"docid": "69e661b4e1154b9542f9d63bc5d62bbb",
"text": "So I did some queries on Google Scholar, and the term of art academics seem to use is target date fund. I notice divided opinions among academics on the matter. W. Pfau gave a nice set of citations of papers with which he disagrees, so I'll start with them. In 1969, Paul Sameulson published the paper Lifetime Portfolio Selection By Dynamic Stochaistic Programming, which found that there's no mathematical foundation for an age based risk tolerance. There seems to be a fundamental quibble relating to present value of future wages; if they are stable and uncorrelated with the market, one analysis suggests the optimal lifecycle investment should start at roughly 300 percent of your portfolio in stocks (via crazy borrowing). Other people point out that if your wages are correlated with stock returns, allocations to stock as low as 20 percent might be optimal. So theory isn't helping much. Perhaps with the advent of computers we can find some kind of empirical data. Robert Shiller authored a study on lifecycle funds when they were proposed for personal Social Security accounts. Lifecycle strategies fare poorly in his historical simulation: Moreover, with these life cycle portfolios, relatively little is contributed when the allocation to stocks is high, since earnings are relatively low in the younger years. Workers contribute only a little to stocks, and do not enjoy a strong effect of compounding, since the proceeds of the early investments are taken out of the stock market as time goes on. Basu and Drew follow up on that assertion with a set of lifecycle strategies and their contrarian counterparts: whereas a the lifecycle plan starts high stock exposure and trails off near retirement, the contrarian ones will invest in bonds and cash early in life and move to stocks after a few years. They show that contrarian strategies have higher average returns, even at the low 25th percentile of returns. It's only at the bottom 5 or 10 percent where this is reversed. One problem with these empirical studies is isolating the effect of the glide path from rebalancing. It could be that a simple fixed allocation works plenty fine, and that selling winners and doubling down on losers is the fundamental driver of returns. Schleef and Eisinger compare lifecycle strategy with a number of fixed asset allocation schemes in Monte Carlo simulations and conclude that a 70% equity, 30% long term corp bonds does as well as all of the lifecycle funds. Finally, the earlier W Pfau paper offers a Monte Carlo simulation similar to Schleef and Eisinger, and runs final portfolio values through a utility function designed to calculate diminishing returns to more money. This seems like a good point, as the risk of your portfolio isn't all or nothing, but your first dollar is more valuable than your millionth. Pfau finds that for some risk-aversion coefficients, lifecycles offer greater utility than portfolios with fixed allocations. And Pfau does note that applying their strategies to the historical record makes a strong recommendation for 100 percent stocks in all but 5 years from 1940-2011. So maybe the best retirement allocation is good old low cost S&P index funds!",
"title": ""
},
{
"docid": "56290eb39d292df78b8af33f4e308903",
"text": "Mostly you nailed it. It's a good question, and the points you raise are excellent and comprise good analysis. Probably the biggest drawback is if you don't agree with the asset allocation strategy. It may be too much/too little into stocks/bonds/international/cash. I am kind of in this boat. My 401K offers very little choices in funds, but offers Vanguard target funds. These tend to be a bit too conservative for my taste, so I actually put money in the 2060 target fund. If I live that long, I will be 94 in 2060. So if the target funds are a bit too aggressive for you, move down in years. If they are a bit too conservative, move up.",
"title": ""
},
{
"docid": "08b7eac4258132d5822ce91ed957babb",
"text": "I think not. I think a discussion of optimum mix is pretty independent of age. While a 20 year old may have 40 years till retirement, a 60 year old retiree has to plan for 30 years or more of spending. I'd bet that no two posters here would give the same optimum mix for a given age, why would anyone expect the Wall Street firms to come up with something better than your own gut suggests?",
"title": ""
},
{
"docid": "f0a717cb3d03349eff74c42a58816337",
"text": "The standard advice is that stocks are all over the place, and bonds are stable. Not necessarily true. Magazines have to write for the lowest common denominator reader, so sometimes the advice given is fortune-cookie like. And like mbhunter pointed out, the advertisers influence the advice. When you read about the wonders of Index funds, and see a full page ad for Vanguard or the Nasdaq SPDR fund, you need to consider the motivation behind the advice. If I were you, I would take advantage of current market conditions and take some profits. Put as much as 20% in cash. If you're going to buy bonds, look for US Government or Municipal security bond funds for about 10% of your portfolio. You're not at an age where investment income matters, you're just looking for some safety, so look for bond funds or ETFs with low durations. Low duration protects your principal value against rate swings. The Vanguard GNMA fund is a good example. $100k is a great pot of money for building wealth, but it's a job that requires you to be active, informed and engaged. Plan on spending 4-8 hours a week researching your investments and looking for new opportunities. If you can't spend that time, think about getting a professional, fee-based advisor. Always keep cash so that you can take advantage of opportunities without creating a taxable event or make a rash decision to sell something because you're excited about a new opportunity.",
"title": ""
},
{
"docid": "e807c92da46aa5593ceb19e23329ecb6",
"text": "Michigan's 529 plan offers a wide variety of investment options, ranging from a very conservative guaranteed investment option (currently earning 1.75% interest) to a variety of index-based funds, most of which are considered aggressive. You said that you are unhappy with the 5% you have earned the past year, and that you thought you should be able to get 8% elsewhere. But according to your comment, you have 30% of your money earning a fixed 1.75% rate, and another 40% of your money invested in one of the moderate balanced options (which includes both stocks and bonds). You've only got 30% invested in the more aggressive investments that you seem to be looking for. If you want to be invested more agressively (which is reasonable, since your daughter won't need this money for many years), you can select more aggressive investments inside the 529. Michigan's 529 offers you the ability to deduct up to $10,000 (if you are married filing jointly) of contributions off your Michigan state income tax each year. In addition, the earnings inside the 529 are federally tax-free if the money is spent on college education.",
"title": ""
},
{
"docid": "e256880a79a54701a562389d0a2fd2ab",
"text": "If you spent your whole life earning the same portfolio that amounts $20,000, the variance and volatility of watching your life savings drop to $10,000 overnight has a greater consequence than for someone who is young. This is why riskier portfolios aren't advised for older people closer to or within retirement age, the obvious complementary group being younger people who could lose more with lesser permanent consequence. Your high risk investment choices have nothing to do with your ability to manage other people's money, unless you fail to make a noteworthy investment return, then your high risk approach will be the death knell to your fund managing aspirations.",
"title": ""
},
{
"docid": "257b39ff066fa883fd2ac3d6524a037f",
"text": "A UTMA may or may not fit your situation. The main drawbacks to a UTMA account is that it will count against your child for financial aid (it counts as the child's asset). The second thing to consider is that taxes aren't deferred like in a 529 plan. The last problem of course is that when he turns 18 he gets control of the account and can spend the money on random junk (which may or may not be important to you). A 529 plan has a few advantages over a UTMA account. The grandparents can open the account with your son as the beneficiary and the money doesn't show up on financial aid for college (under current law which could change of course). Earnings grow tax free which will net you more total growth. You can also contribute substantially more without triggering the gift tax ~$60k. Also many states provide a state tax break for contributing to the state sponsored 529 plan. The account owner would be the grandparents so junior can't spend the money on teenage junk. The big downside to the 529 is the 10% penalty if the money isn't used for higher education. The flip side is that if the money is left for 20 years you will also have additional growth from the 20 years of tax free growth which may be a wash depending on your tax bracket and the tax rates in effect over those 20 years.",
"title": ""
},
{
"docid": "2b5b90e9340e1eadbd41a2f035e6a76b",
"text": "\"Most people advocate a passively managed, low fee mutual fund that simply aims to track a given benchmark (say S&P 500). Few funds can beat the S&P consistently, so investors are often better served finding a no load passive fund. First thing I would do is ask your benefits rep why you don't have an option to invest in a Fidelity passive index fund like Spartan 500. Ideally young people would be heavy in equities and slowly divest for less risky stuff as retirement comes closer, and rebalance the portfolio regularly when market swings put you off risk targets. Few people know how to do this and actually do so. So there are mutual funds that do it for you, for a fee. These in are called \"\"lifecycle\"\" funds (The Freedom funds here). I hesitate to recommend them because they're still fairly new. If you take a look at underlying assets, these things generally just reinvest in the broker's other funds, which themselves have expenses & fees. And there's all kinds personal situations that might lead to you place a portion with a different investment.\"",
"title": ""
},
{
"docid": "927ea2518401bc61d9560f1f7bd8e97f",
"text": "\"As others are saying, you want to be a bit wary of completely counting on a defined benefit pension plan to be fulfilling exactly the same promises during your retirement that it's making right now. But, if in fact you've \"\"won the game\"\" (for lack of a better term) and are sure you have enough to live comfortably in retirement for whatever definition of \"\"comfortably\"\" you choose, there are basically two reasonable approaches: Those are all reasonable approaches, and so it really comes down to what your risk tolerance is (a.k.a. \"\"Can I sleep comfortably at night without staying up worrying about my portfolio?\"\"), what your goals for your money are (Just taking care of yourself? Trying to \"\"leave a legacy\"\" via charity or heirs or the like? Wanting a \"\"dream\"\" retirement traveling the world if possible but content to stay home if it's not?), and how confident you are in being able to calculate your \"\"needs\"\" in retirement and what your assets will truly be by then. You ask \"\"if it would be unwise at this stage of my life to create a portfolio that's too conservative\"\", but of course if it's \"\"too conservative\"\" then it would have been unwise. But I don't think it's unwise, at any stage of life, to create a portfolio that's \"\"conservative enough\"\". Only take risks if you have the need, ability, and willingness to do so.\"",
"title": ""
},
{
"docid": "52a68e315eefe0325f56476761a2d3ea",
"text": "Over time, fees are a killer. The $65k is a lot of money, of course, but I'd like to know the fees involved. Are you doubling from 1 to 2%? if so, I'd rethink this. Diversification adds value, I agree, but 2%/yr? A very low cost S&P fund will be about .10%, others may go a bit higher. There's little magic in creating the target allocation, no two companies are going to be exactly the same, just in the general ballpark. I'd encourage you to get an idea of what makes sense, and go DIY. I agree 2% slices of some sectors don't add much, don't get carried away with this.",
"title": ""
},
{
"docid": "d356e065a65de9c35e9d108e23d322f2",
"text": "2 + 20 isn't really a investment style, more of a management style. As CTA I don't have specific experience in the Hedge Fund industry but they are similar. For tech stuff, you may want to check out Interactive Brokers. As for legal stuff, with a CTA you need to have power of attorney form, disclosure documents, risk documents, fees, performance, etc. You basically want to cover your butt and make sure clients understand everything. For regulatory compliance and rules, you would have to consult your apporiate regulatory body. For a CTA its the NFA/CFTC. You should look at getting licensed to provide crediabilty. For a CTA it would be the series 3 license at the very least and I can provide you with a resource for study guides and practice test taking for ALL licenses. I can provide a brief step by step guide later on.",
"title": ""
},
{
"docid": "70423b1c3d64f05ea5ae171e3c0ca8da",
"text": "At your age, I don't think its a bad idea to invest entirely in stocks. The concern with stocks is their volatility, and at 40+ years from retirement, volatility does not concern you. Just remember that if you ever want to call upon your 401(k) for anything other than retirement, such as a down payment on a home (which is a qualified distribution that is not subject to early distribution penalties), then you should reconsider your retirement allocations. I would not invest 100% into stocks if I knew I were going to buy a house in five years and needed that money for a down payment. If your truly saving strictly for a retirement that could occur forty years in the future, first good for you, and second, put it all in an index fund. An S&P index has a ridiculously low expense ratio, and with so many years away from retirement, it gives you an immense amount of flexibility to choose what to do with those funds as your retirement date approaches closer every year.",
"title": ""
},
{
"docid": "4fb93947461cf2614b37f4ea50bbec9b",
"text": "Googling vanguard target asset allocation led me to this page on the Bogleheads wiki which has detailed breakdowns of the Target Retirement funds; that page in turn has a link to this Vanguard PDF which goes into a good level of detail on the construction of these funds' portfolios. I excerpt: (To the question of why so much weight in equities:) In our view, two important considerations justify an expectation of an equity risk premium. The first is the historical record: In the past, and in many countries, stock market investors have been rewarded with such a premium. ... Historically, bond returns have lagged equity returns by about 5–6 percentage points, annualized—amounting to an enormous return differential in most circumstances over longer time periods. Consequently, retirement savers investing only in “safe” assets must dramatically increase their savings rates to compensate for the lower expected returns those investments offer. ... The second strategic principle underlying our glidepath construction—that younger investors are better able to withstand risk—recognizes that an individual’s total net worth consists of both their current financial holdings and their future work earnings. For younger individuals, the majority of their ultimate retirement wealth is in the form of what they will earn in the future, or their “human capital.” Therefore, a large commitment to stocks in a younger person’s portfolio may be appropriate to balance and diversify risk exposure to work-related earnings (To the question of how the exact allocations were decided:) As part of the process of evaluating and identifying an appropriate glide path given this theoretical framework, we ran various financial simulations using the Vanguard Capital Markets Model. We examined different risk-reward scenarios and the potential implications of different glide paths and TDF approaches. The PDF is highly readable, I would say, and includes references to quant articles, for those that like that sort of thing.",
"title": ""
}
] |
fiqa
|
e3ae094717fa7fe7614cc6f8907d616b
|
Optical Flow Estimation Using a Spatial Pyramid Network
|
[
{
"docid": "c29349c32074392e83f51b1cd214ec8a",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
}
] |
[
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "9c2debf407dce58d77910ccdfc55a633",
"text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.",
"title": ""
},
{
"docid": "517d9e98352aa626cecae9e17cbbbc97",
"text": "The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoderdecoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, and thus becomes ineffective. In this paper, we propose a variational attention mechanism for VED, where the attention vector is also modeled as Gaussian distributed random variables. Results on two experiments show that, without loss of quality, our proposed method alleviates the bypassing phenomenon as it increases the diversity of generated sentences.1",
"title": ""
},
{
"docid": "10f3cafc05b3fb3b235df34aebbe0e23",
"text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.",
"title": ""
},
{
"docid": "227eafe2c4a014339dcd241e465c2f5c",
"text": "Language is a means of expression. We express our feelings, emotions, thoughts, needs, desires etc. in words, symbols and gesture which is considered as language. Language can be defined as verbal, physical, biologically innate, and a basic form of communication. Culture is the characteristics of a particular group of people, defined by everything from language, religion, cuisine, social habits, music and arts. Thus culture finds its expression in language; so, learning a new language without familiarity with its culture remains incomplete. An important question arises here, is it necessary to learn about the culture of the target language to acquire English as a foreign or second language? There are great discussions by many scholars and researchers on this topic for decades. This article aims at defining culture, its relationship with language and what role it plays in teaching and learning English as a foreign or second language. This also shed light on how to teach culture in English language classroom.",
"title": ""
},
{
"docid": "01f9b07bc5c6ca47a6181deb908445e8",
"text": "This paper deals with deep neural networks for predicting accurate dense disparity map with Semi-global matching (SGM). SGM is a widely used regularization method for real scenes because of its high accuracy and fast computation speed. Even though SGM can obtain accurate results, tuning of SGMs penalty-parameters, which control a smoothness and discontinuity of a disparity map, is uneasy and empirical methods have been proposed. We propose a learning based penalties estimation method, which we call SGM-Nets that consist of Convolutional Neural Networks. A small image patch and its position are input into SGMNets to predict the penalties for the 3D object structures. In order to train the networks, we introduce a novel loss function which is able to use sparsely annotated disparity maps such as captured by a LiDAR sensor in real environments. Moreover, we propose a novel SGM parameterization, which deploys different penalties depending on either positive or negative disparity changes in order to represent the object structures more discriminatively. Our SGM-Nets outperformed state of the art accuracy on KITTI benchmark datasets.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "14cc42c141a420cb354473a38e755091",
"text": "During software evolution, information about changes between different versions of a program is useful for a number of software engineering tasks. For example, configuration-management systems can use change information to assess possible conflicts among updates from different users. For another example, in regression testing, knowledge about which parts of a program are unchanged can help in identifying test cases that need not be rerun. For many of these tasks, a purely syntactic differencing may not provide enough information for the task to be performed effectively. This problem is especially relevant in the case of object-oriented software, for which a syntactic change can have subtle and unforeseen effects. In this paper, we present a technique for comparing object-oriented programs that identifies both differences and correspondences between two versions of a program. The technique is based on a representation that handles object-oriented features and, thus, can capture the behavior of object-oriented programs. We also present JDiff, a tool that implements the technique for Java programs. Finally, we present the results of four empirical studies, performed on many versions of two medium-sized subjects, that show the efficiency and effectiveness of the technique when used on real programs.",
"title": ""
},
{
"docid": "238aac56366875b1714284d3d963fe9b",
"text": "We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of $$x_1, \\ldots , x_t$$ x1,…,xt to compute $$f(x_1, \\ldots , x_t)$$ f(x1,…,xt) but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure. Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either rely on stronger assumptions and provided weaker security guarantees (Goldwasser et al. in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.",
"title": ""
},
{
"docid": "e566bb3425c986c22e76f78183eb2bb7",
"text": "A blog site consists of many individual blog postings. Current blog search services focus on retrieving postings but there is also a need to identify relevant blog sites. Blog site search is similar to resource selection in distributed information retrieval, in that the target is to find relevant collections of documents. We introduce resource selection techniques for blog site search and evaluate their performance. Further, we propose a \"diversity factor\" that measures the topic diversity of each blog site. Our results show that the appropriate combination of the resource selection techniques and the diversity factor can achieve significant improvements in retrieval performance compared to baselines. We also report results using these techniques on the TREC blog distillation task.",
"title": ""
},
{
"docid": "432149654abdfdabb9147a830f50196d",
"text": "In this paper, an advanced High Voltage (HV) IGBT technology, which is focused on low loss and is the ultimate device concept for HV IGBT, is presented. CSTBTTM technology utilizing “ULSI technology” and “Light Punch-Through (LPT) II technology” (i.e. narrow Wide Cell Pitch LPT(II)-CSTBT(III)) for the first time demonstrates breaking through the limitation of HV IGBT's characteristics with voltage ratings ranging from 2500 V up to 6500 V. The improved significant trade-off characteristic between on-state voltage (VCE(sat)) and turn-off loss (EOFF) is achieved by means of a “narrow Wide Cell Pitch CSTBT(III) cell”. In addition, this device achieves a wide operating junction temperature (@218 ∼ 448K) and excellent short circuit behavior with the new cell and vertical designs. The LPT(II) concept is utilized for ensuring controllable IGBT characteristics and achieving a thin N− drift layer. Our results cover design of the Wide Cell Pitch LPT(II)-CSTBT(III) technology and demonstrate high total performance with a great improvement potential.",
"title": ""
},
{
"docid": "4d1a448569c55f919d9ce4da0928c89a",
"text": "The hit, break and cut classes of verbs are grammatically relevant in Kimaragang, as in English. The relevance of such classes for determining how arguments are expressed suggests that the meaning of a verb is composed of (a) systematic components of meaning (the EVENT TEMPLATE); and (b) idiosyncratic properties of the individual root. Assuming this approach to be essentially correct, we compare grammatical phenomena in Kimaragang which are sensitive to verb class membership with phenomena which are not class-sensitive. The tendency that emerges is that class-sensitive alternations do not seem to be affix-dependent, and are quite restricted in their ability to introduce new arguments into the argument structure. 1. Verbs of hitting and breaking in English This paper discusses the relationship between verbal semantics and clause structure in Kimaragang Dusun, an endangered Philippine-type language of northern Borneo. It builds on a classic paper by Charles Fillmore (1970), in which he distinguishes two classes of transitive verbs in English: “surface contact” verbs (e.g., hit, slap, strike, bump, stroke) vs. “change of state” verbs (e.g., break, bend, fold, shatter, crack). Fillmore shows that the members of each class share certain syntactic and semantic properties which distinguish them from members of the other class. He further argues that the correlation between these syntactic and semantic properties supports a view of lexical semantics under which the meaning of a verb is made up of two kinds of elements: (a) systematic components of meaning that are shared by an entire class; and (b) idiosyncratic components that are specific to the individual root. Only the former are assumed to be “grammatically relevant.” This basic insight has been foundational for a large body of subsequent work in the area of lexical semantics. One syntactic test that distinguishes hit verbs from break verbs in English is the “causative alternation”, which is systematically possible with break verbs (John broke the window vs. The 1 I would like to thank Jim Johansson, Farrell Ackerman and John Beavers for helpful discussion of these issues. Thanks also to Jim Johansson for giving me access to his field dictionary (Johansson, n.d.), the source of many of the Kimaragang examples in this paper. Special thanks are due to my primary language consultant, Janama Lantubon. Part of the research for this study was supported by NEH-NSF Documenting Endangered Languages fellowship no. FN-50027-07. The grammar of hitting, breaking and cutting in Kimaragang Dusun 2 window broke) but systematically impossible with hit verbs (John hit the window vs. *The window hit). A second test involves a kind of “possessor ascension”, a paraphrase in which the possessor of a body-part noun can be expressed as direct object. This paraphrase is grammatical with hit verbs (I hit his leg vs. I hit him on the leg) but not with break verbs (I broke his leg vs. *I broke him on the leg). A third diagnostic relates to the potential ambiguity of the passive participle. Participles of both classes take a verbal-eventive reading; but participles of break verbs also allow an adjectival-stative reading (the window is still broken) which is unavailable for participles of hit verbs (*the window is still hit). Semantically, the crucial difference between the two classes is that break verbs entail a result, specifically a “separation in [the] material integrity” of the patient (Hale and Keyser 1987). 
This entailment cannot be cancelled (e.g., I broke the window with a hammer; #it didn’t faze the window, but the hammer shattered). The hit verbs, in contrast, do not have this entailment (I hit the window with a hammer; it didn’t faze the window, but the hammer shattered). A second difference is that break verbs may impose selectional restrictions based on physical properties of the object (I {folded/?bent/ *broke/*shattered} the blanket) whereas hit verbs do not (I {hit/slapped/struck/beat} the blanket). Selectional restrictions of hit verbs are more likely to be based on physical properties of the instrument. In the years since 1970, these two classes of verbs have continued to be studied and discussed in numerous publications. Additional diagnostics have been identified, including the with/against alternation (examples 1–2; cf. Fillmore 1977:75); the CONATIVE alternation (Mary hit/broke the piñata vs. Mary hit/*broke at the piñata; Guerssel et al. 1985); and the Middle alternation (This glass breaks/*hits easily; Fillmore 1977, Hale and Keyser 1987). These tests and others are summarized in Levin (1993). (1) a. I hit the fence with the stick. b. I hit the stick against the fence. (2) a. I broke the window with the stick. b. #I broke the stick against the window. (not the same meaning!!) Another verb class that has received considerable attention in recent years is the cut class (e.g., Guerssel et al. 1985, Bohnemeyer 2007, Asifa et al. 2007). In this paper I will show that these same three classes (hit, break, cut) are distinguished by a number of grammatical and semantic properties in Kimaragang as well. Section 2 briefly introduces some of the basic assumptions that we will adopt about the structure of verb meanings. Section 3 discusses criteria that distinguish hit verbs from break verbs, and section 4 discusses the properties of the cut verbs. Section 5 introduces another test, which I refer to as the instrumental alternation, which exhibits a different pattern for each of the three classes. Section 6 discusses the tests themselves, trying to identify characteristic properties of the constructions that are sensitive to verb classes, and which distinguish these constructions from those that are not class-sensitive. 2. What do verb classes tell us? Fillmore‟s approach to the study of verb meanings has inspired a large volume of subsequent research; see for example Levin (1993), Levin and Rappaport Hovav (1995, 1998, 2005; henceforth L&RH), and references cited in those works. Much of this research is concerned with exploring the following hypotheses, which were already at least partially articulated in Fillmore (1970): The grammar of hitting, breaking and cutting in Kimaragang Dusun 3 a. Verb meanings are composed of two kinds of information. Some components of meaning are systematic, forming a kind of “event template”, while others are idiosyncratic, specific to that particular root. b. Only systematic components of meaning are “grammatically relevant”, more specifically, relevant to argument realization. c. Grammatically determined verb classes are sets of verbs that share the same template. The systematic aspects of meaning distinguish one class from another, while roots belonging to the same class are distinguished by features of their idiosyncratic meaning. Levin (1993) states: “[T]here is a sense in which the notion of verb class is an artificial construct. Verb classes arise because a set of verbs with one or more shared meaning components show similar behavior... 
The important theoretical construct is the meaning component, not the verb class...” Identifying semantically determined sets of verbs is thus a first step in understanding what elements of meaning are relevant for determining how arguments will be expressed. Notice that the three prototypical verbs under consideration here (hit, beak, cut) are all transitive verbs, and all three select the same set of semantic roles: agent, patient, plus optional instrument. Thus the event template that defines each class, and allows us to account for the grammatical differences summarized above, must be more than a simple list of semantic roles. In addition to identifying grammatically relevant components of meaning, the study of verb classes is important as a means of addressing the following questions: (a) What is the nature of the “event template”, and how should it be represented? and (b) What morpho-syntactic processes or constructions are valid tests for “grammatical relevance” in the sense intended above? Clearly these three issues are closely inter-related, and cannot be fully addressed in isolation from each other. However, in this paper I will focus primarily on the third question, which I will re-state in the following way: What kinds of grammatical constructions or tests are relevant for identifying semantically-based verb classes? 3. Verbs of hitting and breaking in Kimaragang 3.1 Causative-inchoative alternation Kimaragang is structurally very similar to the languages of the central Philippines. In particular, Kimaragang exhibits the rich Philippine-type voice system in which the semantic role of the subject (i.e., the NP marked for nominative case) is indicated by the voice affixation of the verb. 2 In the Active Voice, an additional “transitivity” prefix occurs on transitive verbs; this prefix is lacking on intransitive verbs. 3 Many verbal roots occur in both transitive and intransitive forms, as illustrated in (3) with the root patay „die; kill‟. In the most productive pattern, and the one of interest to us here, the intransitive form has an inchoative (change of state) meaning while the transitive form has a causative meaning. However, it is important to note that there is no causative morpheme present in these forms (morphological causatives are marked by a different prefix, po-, as discussed in section 6.1). 2 See Kroeger (2005) for a more detailed summary with examples. 3 For details see Kroeger (1996); Kroeger & Johansson (2005). The grammar of hitting, breaking and cutting in Kimaragang Dusun 4 (3) a. Minamatay(<in>m-poN-patay) oku do tasu. 4 <PST>AV-TR-die 1sg.NOM ACC dog „I killed a dog.‟ b. Minatay(<in>m-patay) it tasu. <PST>AV-die NOM dog „The dog died.‟ Virtually all break-type roots allow both the causative and inchoative forms, as illustrated in (6– 7); but hit-type roots generally occur only in the transitive form. Thus just as in English, the causative alternation is highly productive with ",
"title": ""
},
{
"docid": "95513348196c70bb6242137685a6fbe5",
"text": "People speak at different levels of specificity in different situations.1 A conversational agent should have this ability and know when to be specific and when to be general. We propose an approach that gives a neural network–based conversational agent this ability. Our approach involves alternating between data distillation and model training : removing training examples that are closest to the responses most commonly produced by the model trained from the last round and then retrain the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system for selecting among this pool of generation models, to choose the best level of specificity for a given input. Compared to the original generative model trained without distillation, the proposed system is capable of generating more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach involving training multiple subsystems from a single dataset distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, one can use reinforcement learning to build a system that tailors its output to different input contexts at test time. Depending on their knowledge, interlocutors, mood, etc.",
"title": ""
},
{
"docid": "91c0870355730f553f1dc104318bc55c",
"text": "This paper reviews the main psychological phenomena of inductive reasoning, covering 25 years of experimental and model-based research, in particular addressing four questions. First, what makes a case or event generalizable to other cases? Second, what makes a set of cases generalizable? Third, what makes a property or predicate projectable? Fourth, how do psychological models of induction address these results? The key results in inductive reasoning are outlined, and several recent models, including a new Bayesian account, are evaluated with respect to these results. In addition, future directions for experimental and model-based work are proposed.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "e483d914e00fa46a6be188fabd396165",
"text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.",
"title": ""
},
{
"docid": "215b02216c68ba6eb2d040e8e01c1ac1",
"text": "Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, here raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. The KM strategy selection is a kind of multiple criteria decision-making (MCDM) problem, which requires considering a large number of complex factors as multiple evaluation criteria. A robust MCDM method should consider the interactions among criteria. The analytic network process (ANP) is a relatively new MCDM method which can deal with all kinds of interactions systematically. Moreover, the Decision Making Trial and Evaluation Laboratory (DEMATEL) not only can convert the relations between cause and effect of criteria into a visual structural model, but also can be used as a way to handle the inner dependences within a set of criteria. Hence, this paper proposes an effective solution based on a combined ANP and DEMATEL approach to help companies that need to evaluate and select KM strategies. Additionally, an empirical study is presented to illustrate the application of the proposed method. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bde4d0ca50ed483d0df98af7d623e448",
"text": "We propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) to solve important problems in Image Processing and Computer Vision. We are interested in: two and multiclass image categorization, hidden messages detection, discrimination among natural and forged images, authentication, and multi-classification. Keywords-Machine Learning Techniques; Digital Forensics; Steganalysis; Feature Fusion; Classifier Fusion; Multi-class Classification; Image Categorization.",
"title": ""
}
] |
scidocsrr
|
e52b7f030e81aeb694eb7ea53c4ab32c
|
Identification of embedded mathematical formulas in PDF documents using SVM
|
[
{
"docid": "c0d4f81bb55e1578f2a11dc712937a80",
"text": "Recognizing mathematical expressions in PDF documents is a new and important field in document analysis. It is quite different from extracting mathematical expressions in image-based documents. In this paper, we propose a novel method by combining rule-based and learning-based methods to detect both isolated and embedded mathematical expressions in PDF documents. Moreover, various features of formulas, including geometric layout, character and context content, are used to adapt to a wide range of formula types. Experimental results show satisfactory performance of the proposed method. Furthermore, the method has been successfully incorporated into a commercial software package for large-scale Chinese e-Book production.",
"title": ""
}
] |
[
{
"docid": "3e9845c255b5e816741c04c4f7cf5295",
"text": "This paper presents the packaging technology and the integrated antenna design for a miniaturized 122-GHz radar sensor. The package layout and the assembly process are shortly explained. Measurements of the antenna including the flip chip interconnect are presented that have been achieved by replacing the IC with a dummy chip that only contains a through-line. Afterwards, radiation pattern measurements are shown that were recorded using the radar sensor as transmitter. Finally, details of the fully integrated radar sensor are given, together with results of the first Doppler measurements.",
"title": ""
},
{
"docid": "7dcd4a4e687975b6b774487303fc1a40",
"text": "Analysis of kinship from facial images or videos is an important problem. Prior machine learning and computer vision studies approach kinship analysis as a verification or recognition task. In this paper, first time in the literature, we propose a kinship synthesis framework, which generates smile videos of (probable) children from the smile videos of parents. While the appearance of a child’s smile is learned using a convolutional encoder-decoder network, another neural network models the dynamics of the corresponding smile. The smile video of the estimated child is synthesized by the combined use of appearance and dynamics models. In order to validate our results, we perform kinship verification experiments using videos of real parents and estimated children generated by our framework. The results show that generated videos of children achieve higher correct verification rates than those of real children. Our results also indicate that the use of generated videos together with the real ones in the training of kinship verification models, increases the accuracy, suggesting that such videos can be used as a synthetic dataset.",
"title": ""
},
{
"docid": "bfa178f35027a55e8fd35d1c87789808",
"text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional reg ularities that are salient in the data.",
"title": ""
},
{
"docid": "f9b01c707482eebb9af472fd019f56eb",
"text": "In this paper we discuss the task of discovering topical influ ence within the online social network T WITTER. The main goal of this research is to discover who the influenti al users are with respect to a certain given topic. For this research we have sampled a portion of the T WIT ER social graph, from which we have distilled topics and topical activity, and constructed a se t of diverse features which we believe are useful in capturing the concept of topical influence. We will use sev eral correlation and classification techniques to determine which features perform best with respect to the TWITTER network. Our findings support the claim that only looking at simple popularity features such a s the number of followers is not enough to capture the concept of topical influence. It appears that mor e int icate features are required.",
"title": ""
},
{
"docid": "3d7b37c5328e3631bd8442e2de67fb62",
"text": "In recent years, deep neural networks have achieved great success in the field of computer vision. However, it is still a big challenge to deploy these deep models on resource-constrained embedded devices such as mobile robots, smart phones and so on. Therefore, network compression for such platforms is a reasonable solution to reduce memory consumption and computation complexity. In this paper, a novel channel pruning method based on genetic algorithm is proposed to compress very deep Convolution Neural Networks (CNNs). Firstly, a pre-trained CNN model is pruned layer by layer according to the sensitivity of each layer. After that, the pruned model is fine-tuned based on knowledge distillation framework. These two improvements significantly decrease the model redundancy with less accuracy drop. Channel selection is a combinatorial optimization problem that has exponential solution space. In order to accelerate the selection process, the proposed method formulates it as a search problem, which can be solved efficiently by genetic algorithm. Meanwhile, a two-step approximation fitness function is designed to further improve the efficiency of genetic process. The proposed method has been verified on three benchmark datasets with two popular CNN models: VGGNet and ResNet. On the CIFAR-100 and ImageNet datasets, our approach outperforms several state-of-the-art methods. On the CIFAR-10 and SVHN datasets, the pruned VGGNet achieves better performance than the original model with 8× parameters compression and 3× FLOPs reduction.",
"title": ""
},
{
"docid": "a1c917d7a685154060ddd67d631ea061",
"text": "In this paper, for finding the place of plate, a real time and fast method is expressed. In our suggested method, the image is taken to HSV color space; then, it is broken into blocks in a stable size. In frequent process, each block, in special pattern is probed. With the appearance of pattern, its neighboring blocks according to geometry of plate as a candidate are considered and increase blocks, are omitted. This operation is done for all of the uncontrolled blocks of images. First, all of the probable candidates are exploited; then, the place of plate is obtained among exploited candidates as density and geometry rate. In probing every block, only its lip pixel is studied which consists 23.44% of block area. From the features of suggestive method, we can mention the lack of use of expensive operation in image process and its low dynamic that it increases image process speed. This method is examined on the group of picture in background, distance and point of view. The rate of exploited plate reached at 99.33% and character recognition rate achieved 97%.",
"title": ""
},
{
"docid": "aa9450cdbdb1162015b4d931c32010fb",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
},
{
"docid": "6fe413cf75a694217c30a9ef79fab589",
"text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-",
"title": ""
},
{
"docid": "7f09bdd6a0bcbed0d9525c5d20cf8cbb",
"text": "Distributed are increasing being thought of as a platform for decentralised applications — DApps — and the the focus for many is shifting from Bitcoin to Smart Contracts. It’s thought that encoding contracts and putting them “on the blockchain” will result in a new generation of organisations that are leaner and more efficient than their forebears (“Capps”?”), disrupting these forebears in the process. However, the most interesting aspect of Bitcoin and blockchain is that it involved no new technology, no new math. Their emergence was due to changes in the environment: the priceperformance and penetration of broadband networks reached a point that it was economically viable for a decentralised solution, such as Bitcoin to compete with traditional payment (international remittance) networks. This is combining with another trend — the shift from monolithic firms to multi-sided markets such as AirBnb et al and the rise of “platform businesses” — to enable a new class of solution to emerge. These new solutions enable firms to interact directly, without the need for a facilitator such as a market, exchange, or even a blockchain. In the past these facilitators were firms. More recently they have been “platform businesses.” In the future they may not exist at all. The shift to a distributed environment enables us to reconsider many of the ideas from distributed AI and linked data. Where are the opportunities? How can we avoid the mistakes of the past?",
"title": ""
},
{
"docid": "b82805187bdfd14a4dd5efc6faf70f10",
"text": "8 Cloud computing has gained tremendous popularity in recent years. By outsourcing computation and 9 storage requirements to public providers and paying for the services used, customers can relish upon the 10 advantages of this new paradigm. Cloud computing provides with a comparably lower-cost, scalable, a 11 location-independent platform for managing clients’ data. Compared to a traditional model of computing, 12 which uses dedicated in-house infrastructure, cloud computing provides unprecedented benefits regarding 13 cost and reliability. Cloud storage is a new cost-effective paradigm that aims at providing high 14 availability, reliability, massive scalability and data sharing. However, outsourcing data to a cloud service 15 provider introduces new challenges from the perspectives of data correctness and security. Over the years, 16 many data integrity schemes have been proposed for protecting outsourced data. This paper aims to 17 enhance the understanding of security issues associated with cloud storage and highlights the importance 18 of data integrity schemes for outsourced data. In this paper, we have presented a taxonomy of existing 19 data integrity schemes use for cloud storage. A comparative analysis of existing schemes is also provided 20 along with a detailed discussion on possible security attacks and their mitigations. Additionally, we have 21 discussed design challenges such as computational efficiency, storage efficiency, communication 22 efficiency, and reduced I/O in these schemes. Furthermore; we have highlighted future trends and open 23 issues, for future research in cloud storage security. 24",
"title": ""
},
{
"docid": "4e8d7e1fdb48da4198e21ae1ef2cd406",
"text": "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pretrain action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics [14], UCF-101 [30] and HMDB-51 [15], models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. On THUMOS14 [12] and ActivityNet-v1.3[2], our localization model improves the mAP of baseline model by 8.6% and 2.5%, respectively.",
"title": ""
},
{
"docid": "1db42d9d65737129fa08a6ad4d52d27e",
"text": "This study introduces a unique prototype system for structural health monitoring (SHM), SmartSync, which uses the building’s existing Internet backbone as a system of virtual instrumentation cables to permit modular and largely plug-and-play deployments. Within this framework, data streams from distributed heterogeneous sensors are pushed through network interfaces in real time and seamlessly synchronized and aggregated by a centralized server, which performs basic data acquisition, event triggering, and database management while also providing an interface for data visualization and analysis that can be securely accessed. The system enables a scalable approach to monitoring tall and complex structures that can readily interface a variety of sensors and data formats (analog and digital) and can even accommodate variable sampling rates. This study overviews the SmartSync system, its installation/operation in theworld’s tallest building, Burj Khalifa, and proof-of-concept in triggering under dual excitations (wind and earthquake).DOI: 10.1061/(ASCE)ST.1943-541X.0000560. © 2013 American Society of Civil Engineers. CE Database subject headings: High-rise buildings; Structural health monitoring; Wind loads; Earthquakes. Author keywords: Tall buildings; Structural health monitoring; System identification.",
"title": ""
},
{
"docid": "61ffc67f0e242afd8979d944cbe2bff4",
"text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.",
"title": ""
},
{
"docid": "34bf7fb014f5b511943526c28407cb4b",
"text": "Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also run a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95%, for most of the considered actions. We compared our solution with the three state-of-the-art algorithms, and confirming that our system outperforms all these direct competitors.",
"title": ""
},
{
"docid": "0815549f210c57b28a7e2fc87c20f616",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "ef584ca8b3e9a7f8335549927df1dc16",
"text": "Rapid evolution in technology and the internet brought us to the era of online services. E-commerce is nothing but trading goods or services online. Many customers share their good or bad opinions about products or services online nowadays. These opinions become a part of the decision-making process of consumer and make an impact on the business model of the provider. Also, understanding and considering reviews will help to gain the trust of the customer which will help to expand the business. Many users give reviews for the single product. Such thousands of review can be analyzed using big data effectively. The results can be presented in a convenient visual form for the non-technical user. Thus, the primary goal of research work is the classification of customer reviews given for the product in the map-reduce framework.",
"title": ""
},
{
"docid": "c2571f794304a6b0efdc4fe22bac89e5",
"text": "PURPOSE\nThe aim of this study was to analyse the psychometric properties of the Portuguese version of the body image scale (BIS; Hopwood, P., Fletcher, I., Lee, A., Al Ghazal, S., 2001. A body image scale for use with cancer patients. European Journal of Cancer, 37, 189-197). This is a brief and psychometric robust measure of body image for use with cancer patients, independently of age, cancer type, treatment or stage of the disease and it was developed in collaboration with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Study Group.\n\n\nMETHOD\nThe sample is comprised of 173 Portuguese postoperative breast cancer patients that completed a battery of measures that included the BIS and other scales of body image and quality of life, in order to explore its construct validity.\n\n\nRESULTS\nThe Portuguese version of BIS confirmed the original unidimensional structure and demonstrated adequate internal consistency, both in the global sample (alpha=.93) as in surgical subgroups (mastectomy=.92 and breast-conserving surgery=.93). Evidence for the construct validity was provided through moderate to largely sized correlations between the BIS and other related measures. In further support of its discriminant validity, significant differences in BIS scores were found between women who underwent mastectomy and those who underwent breast-conserving surgery, with the former presenting higher scores. Age and time since diagnosis were not associated with BIS scores.\n\n\nCONCLUSIONS\nThe Portuguese BIS proved to be a reliable and valid measure of body image concerns in a sample of breast cancer patients, allowing a brief and comprehensive assessment, both on clinical and research settings.",
"title": ""
},
{
"docid": "ed65e73d6e78f44390d1734bfad77b54",
"text": "Frequency overlap across wireless networks with different radio technologies can cause severe interference and reduce communication reliability. The circumstances are particularly unfavorable for ZigBee networks that share the 2.4 GHz ISM band with WiFi senders capable of 10 to 100 times higher transmission power. Our work first examines the interference patterns between ZigBee and WiFi networks at the bit-level granularity. Under certain conditions, ZigBee activities can trigger a nearby WiFi transmitter to back off, in which case the header is often the only part of the Zig-Bee packet being corrupted. We call this the symmetric interference regions, in comparison to the asymmetric regions where the ZigBee signal is too weak to be detected by WiFi senders, but WiFi activity can uniformly corrupt any bit in a ZigBee packet. With these observations, we design BuzzBuzz to mitigate WiFi interference through header and payload redundancy. Multi-Headers provides header redundancy giving ZigBee nodes multiple opportunities to detect incoming packets. Then, TinyRS, a full-featured Reed Solomon library for resource-constrained devices, helps decoding polluted packet payload. On a medium-sized testbed, BuzzBuzz improves the ZigBee network delivery rate by 70%. Furthermore, BuzzBuzz reduces ZigBee retransmissions by a factor of three, which increases the WiFi throughput by 10%.",
"title": ""
},
{
"docid": "db6633228791ca2c725a804f3e58252e",
"text": "Developments and new advances in medical technology and the improvement of people’s living standards have helped to make many people healthier. However, there are still large design deficiencies due to the imbalanced distribution of medical resources, especially in developing countries. To address this issue, a video conference-based telemedicine system is deployed to break the limitations of medical resources in terms of time and space. By outsourcing medical resources from big hospitals to rural and remote ones, centralized and high quality medical resources can be shared to achieve a higher salvage rate while improving the utilization of medical resources. Though effective, existing telemedicine systems only treat patients’ physiological diseases, leaving another challenging problem unsolved: How to remotely detect patients’ emotional state to diagnose psychological diseases. In this paper, we propose a novel healthcare system based on a 5G Cognitive System (5G-Csys). The 5G-Csys consists of a resource cognitive engine and a data cognitive engine. Resource cognitive intelligence, based on the learning of network contexts, aims at ultra-low latency and ultra-high reliability for cognitive applications. Data cognitive intelligence, based on the analysis of healthcare big data, is used to handle a patient’s health status physiologically and psychologically. In this paper, the architecture of 5G-Csys is first presented, and then the key technologies and application scenarios are discussed. To verify our proposal, we develop a prototype platform of 5G-Csys, incorporating speech emotion recognition. We present our experimental results to demonstrate the effectiveness of the proposed system. We hope this paper will attract further research in the field of healthcare based on 5G cognitive systems.",
"title": ""
}
] |
scidocsrr
|
66f65d037d045dcdfd9347297b45ef8e
|
Application of knowledge-based approaches in software architecture: A systematic mapping study
|
[
{
"docid": "ca6b556eb4de9a8f66aefd5505c20f3d",
"text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich",
"title": ""
}
] |
[
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "073e3296fc2976f0db2f18a06b0cb816",
"text": "Nowadays spoofing detection is one of the priority research areas in the field of automatic speaker verification. The success of Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015 confirmed the impressive perspective in detection of unforeseen spoofing trials based on speech synthesis and voice conversion techniques. However, there is a small number of researches addressed to replay spoofing attacks which are more likely to be used by non-professional impersonators. This paper describes the Speech Technology Center (STC) anti-spoofing system submitted for ASVspoof 2017 which is focused on replay attacks detection. Here we investigate the efficiency of a deep learning approach for solution of the mentioned-above task. Experimental results obtained on the Challenge corpora demonstrate that the selected approach outperforms current state-of-the-art baseline systems in terms of spoofing detection quality. Our primary system produced an EER of 6.73% on the evaluation part of the corpora which is 72% relative improvement over the ASVspoof 2017 baseline system.",
"title": ""
},
{
"docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75",
"text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.",
"title": ""
},
{
"docid": "4f84d3a504cf7b004a414346bb19fa94",
"text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.",
"title": ""
},
{
"docid": "2bbbd2d1accca21cdb614a0324aa1a0d",
"text": "We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.",
"title": ""
},
{
"docid": "86bdb6616629da9c2574dc722b003ccf",
"text": "This paper considers the problem of extending Training an Agent Manually via Evaluative Reinforcement (TAMER) in continuous state and action spaces. Investigative research using the TAMER framework enables a non-technical human to train an agent through a natural form of human feedback (negative or positive). The advantages of TAMER have been shown on tasks of training agents by only human feedback or combining human feedback with environment rewards. However, these methods are originally designed for discrete state-action, or continuous state-discrete action problems. This paper proposes an extension of TAMER to allow both continuous states and actions, called ACTAMER. The new framework utilizes any general function approximation of a human trainer’s feedback signal. Moreover, a combined capability of ACTAMER and reinforcement learning is also investigated and evaluated. The combination of human feedback and reinforcement learning is studied in both settings: sequential and simultaneous. Our experimental results demonstrate the proposed method successfully allowing a human to train an agent in two continuous state-action domains: Mountain Car and Cart-pole (balancing).",
"title": ""
},
{
"docid": "35b64e16a8a86ddbee49177f75a662fd",
"text": "Large scale, multidisciplinary, engineering designs are always difficult due to the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is by constructing computationally cheap(er) approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data driven, surrogate-based optimization algorithm that uses a trust region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages—SURFPACK and SHEPPACK that provide a collection of approximation algorithms to build the surrogates and three different DOE techniques—full factorial (FF), Latin hypercube sampling, and central composite design—are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using the SAO framework based on statistical sampling is the generation of the required database. As the number of design variables grows, the computational cost of generating the required database grows rapidly. A data driven approach is proposed to tackle this situation, where the trick is to run the expensive simulation if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation runs during the optimization process.",
"title": ""
},
{
"docid": "a61c1e5c1eafd5efd8ee7021613cf90d",
"text": "A millimeter-wave (mmW) bandpass filter (BPF) using substrate integrated waveguide (SIW) is proposed in this work. A BPF with three resonators is formed by etching slots on the top metal plane of the single SIW cavity. The filter is investigated with the theory of electric coupling mechanism. The design procedure and design curves of the coupling coefficient (K) and quality factor (Q) are given and discussed here. The extracted K and Q are used to determine the filter circuit dimensions. In order to prove the validity, a SIW BPF operating at 140 GHz is fabricated in a single circuit layer using low temperature co-fired ceramic (LTCC) technology. The measured insertion loss is 1.913 dB at 140 GHz with a fractional bandwidth of 13.03%. The measured results are in good agreement with simulated results in such high frequency.",
"title": ""
},
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "fdfcf2f910884bf899623d2711386db2",
"text": "A number of vehicles may be controlled and supervised by traffic security and its management. The License Plate Recognition is broadly employed in traffic management to recognize a vehicle whose owner has despoiled traffic laws or to find stolen vehicles. Vehicle License Plate Detection and Recognition is a key technique in most of the traffic related applications such as searching of stolen vehicles, road traffic monitoring, airport gate monitoring, speed monitoring and automatic parking lots access control. It is simply the ability of automatically extract and recognition of the vehicle license number plate's character from a captured image. Number Plate Recognition method suffered from problem of feature selection process. The current method of number plate recognition system only focus on local, global and Neural Network process of Feature Extraction and process for detection. The Optimized Feature Selection process improves the detection ratio of number plate recognition. In this paper, it is proposed a new methodology for `License Plate Recognition' based on wavelet transform function. This proposed methodology compare with Correlation based method for detection of number plate. Empirical result shows that better performance in comparison of correlation based technique for number plate recognition. Here, it is modified the Matching Technique for numberplate recognition by using Multi-Class RBF Neural Network Optimization.",
"title": ""
},
{
"docid": "fde0b02f0dbf01cd6a20b02a44cdc6cf",
"text": "This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with an high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.",
"title": ""
},
{
"docid": "1ad92c6656e89a40b0a376f8c1693760",
"text": "This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot’s ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people’s daily lives.",
"title": ""
},
{
"docid": "5c76caebe05acd7d09e6cace0cac9fe1",
"text": "A program that detects people in images has a multitude of potential applications, including tracking for biomedical applications or surveillance, activity recognition for person-device interfaces (device control, video games), organizing personal picture collections, and much more. However, detecting people is difficult, as the appearance of a person can vary enormously because of changes in viewpoint or lighting, clothing style, body pose, individual traits, occlusion, and more. It then makes sense that the first people detectors were really detectors of pedestrians, that is, people walking at a measured pace on a sidewalk, and viewed from a fixed camera. Pedestrians are nearly always upright, their arms are mostly held along the body, and proper camera placement relative to pedestrian traffic can virtually ensure a view from the front or from behind (Figure 1). These factors reduce variation of appearance, although clothing, illumination, background, occlusions, and somewhat limited variations of pose still present very significant challenges.",
"title": ""
},
{
"docid": "b4ae619b0b9cc966622feb2dceda0f2e",
"text": "A novel pressure sensing circuit for non-invasive RF/microwave blood glucose sensors is presented in this paper. RF sensors are of interest to researchers for measuring blood glucose levels non-invasively. For the measurements, the finger is a popular site that has a good amount of blood supply. When a finger is placed on top of the RF sensor, the electromagnetic fields radiating from the sensor interact with the blood in the finger and the resulting sensor response depends on the permittivity of the blood. The varying glucose level in the blood results in a permittivity change causing a shift in the sensor's response. Therefore, by observing the sensor's frequency response it may be possible to predict the blood glucose level. However, there are two crucial points in taking and subsequently predicting the blood glucose level. These points are; the position of the finger on the sensor and the pressure applied onto the sensor. A variation in the glucose level causes a very small frequency shift. However, finger positioning and applying inconsistent pressure have more pronounced effect on the sensor response. For this reason, it may not be possible to take a correct reading if these effects are not considered carefully. Two novel pressure sensing circuits are proposed and presented in this paper to accurately monitor the pressure applied.",
"title": ""
},
{
"docid": "855f67a94e8425846584e5c82355fa91",
"text": "This paper is the product of a workshop held in Amsterdam during the Software Technology and Practice Conference (STEP 2003). The purpose of the paper is to propose Bloom's taxonomy levels for the Guide to the Software Engineering Body of Knowledge (SWEBOK) topics for three software engineer profiles: a new graduate, a graduate with four years of experience, and an experienced member of a software engineering process group. Bloom's taxonomy levels are proposed for topics of four knowledge areas of the SWEBOK Guide: software maintenance, software engineering management, software engineering process, and software quality. By proposing Bloom's taxonomy in this way, the paper aims to illustrate how such profiles could be used as a tool in defining job descriptions, software engineering role descriptions within a software engineering process definition, professional development paths, and training programs.",
"title": ""
},
{
"docid": "a7c0bdbf05ce5d8da20a80dcc3bfaec0",
"text": "Neurosurgery is a medical specialty that relies heavily on imaging. The use of computed tomography and magnetic resonance images during preoperative planning and intraoperative surgical navigation is vital to the success of the surgery and positive patient outcome. Augmented reality application in neurosurgery has the potential to revolutionize and change the way neurosurgeons plan and perform surgical procedures in the future. Augmented reality technology is currently commercially available for neurosurgery for simulation and training. However, the use of augmented reality in the clinical setting is still in its infancy. Researchers are now testing augmented reality system prototypes to determine and address the barriers and limitations of the technology before it can be widely accepted and used in the clinical setting.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "28c0ce094c4117157a27f272dbb94b91",
"text": "This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome eventgenerating dynamic vision sensor pixels and 5-transistor active pixels sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling or global shutter RGBW coded VGA resolution frames and asynchronous monochrome QVGA resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18um CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20um × 20um. The chip die measures 8mm × 6.2mm.",
"title": ""
},
{
"docid": "d50d3997572847200f12d69f61224760",
"text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.",
"title": ""
}
] |
scidocsrr
|
9ae695d10f01e9cf4ed35160e49fa1ac
|
Dependency-based Gated Recursive Neural Network for Chinese Word Segmentation
|
[
{
"docid": "1aeace70da31d29cb880e61817432bf7",
"text": "This paper investigates improving supervised word segmentation accuracy with unlabeled data. Both large-scale in-domain data and small-scale document text are considered. We present a unified solution to include features derived from unlabeled data to a discriminative learning model. For the large-scale data, we derive string statistics from Gigaword to assist a character-based segmenter. In addition, we introduce the idea about transductive, document-level segmentation, which is designed to improve the system recall for out-ofvocabulary (OOV) words which appear more than once inside a document. Novel features1 result in relative error reductions of 13.8% and 15.4% in terms of F-score and the recall of OOV words respectively.",
"title": ""
},
{
"docid": "1af1ab4da0fe4368b1ad97801c4eb015",
"text": "Standard approaches to Chinese word segmentation treat the problem as a tagging task, assigning labels to the characters in the sequence indicating whether the character marks a word boundary. Discriminatively trained models based on local character features are used to make the tagging decisions, with Viterbi decoding finding the highest scoring segmentation. In this paper we propose an alternative, word-based segmentor, which uses features based on complete words and word sequences. The generalized perceptron algorithm is used for discriminative training, and we use a beamsearch decoder. Closed tests on the first and secondSIGHAN bakeoffs show that our system is competitive with the best in the literature, achieving the highest reported F-scores for a number of corpora.",
"title": ""
},
{
"docid": "e9a9938b77b2f739a83b987455bc2ef7",
"text": "Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of the context characters. Since GRNN is relative deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on the benchmark datasets show that our model outperforms the previous neural network models as well as the state-of-the-art methods.",
"title": ""
},
{
"docid": "a9b20ad74b3a448fbc1555b27c4dcac9",
"text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.",
"title": ""
}
] |
[
{
"docid": "469a764d3c5a7e3048229c6f1ce41bfd",
"text": "A central issue for researchers of human spatial knowledge, whether focused on perceptually guided action or cognitive-map acquisition, is knowledge of egocentric directions, directions from the body to objects and places. Several methods exist for measuring this knowledge. We compared two particularly important methods, manual pointing with a dial and whole-body rotation (body heading), under various conditions of sensory or memory access to targets. In two experiments, blindfolded body rotation resulted in the greatest variability of performance (variable error), while the manual dial resulted in greater consistent bias (constant error). The variability of performance with body rotation was no greater than that of the dial when subjects' memory loads for directions to targets was reduced by allowing them to peek at targets in between trials, point to concurrent auditory targets, or point with their eyes open. In both experiments, errors with the manual dial were greater for directions to targets that were further from the closest orthogonal axis (ahead, behind, right, left), while errors with body rotation with restricted perceptual access were greater for directions to targets that were further from an axis straight ahead of subjects. This suggests that the two methods will produce evidence of different organizational frameworks for egocentric spatial knowledge. Implications for the structures and processes that underlie egocentric spatial knowledge, and are involved in estimating directions, are discussed, as is the value of decomposing absolute errors into variable and constant errors.",
"title": ""
},
{
"docid": "d2c8328984a05e56e449d6558b1e73fb",
"text": "MOTIVATION\nThe development of chemoinformatics has been hampered by the lack of large, publicly available, comprehensive repositories of molecules, in particular of small molecules. Small molecules play a fundamental role in organic chemistry and biology. They can be used as combinatorial building blocks for chemical synthesis, as molecular probes in chemical genomics and systems biology, and for the screening and discovery of new drugs and other useful compounds.\n\n\nRESULTS\nWe describe ChemDB, a public database of small molecules available on the Web. ChemDB is built using the digital catalogs of over a hundred vendors and other public sources and is annotated with information derived from these sources as well as from computational methods, such as predicted solubility and three-dimensional structure. It supports multiple molecular formats and is periodically updated, automatically whenever possible. The current version of the database contains approximately 4.1 million commercially available compounds and 8.2 million counting isomers. The database includes a user-friendly graphical interface, chemical reactions capabilities, as well as unique search capabilities.\n\n\nAVAILABILITY\nDatabase and datasets are available on http://cdb.ics.uci.edu.",
"title": ""
},
{
"docid": "1a69dddb0b690512d29c7f26d64f4a95",
"text": "In the simultaneous message model, two parties holding <i>n</i>-bit integers <i>x,y</i> send messages to a third party, the <i>referee</i>, enabling him to compute a boolean function <i>f(x,y)</i>. Buhrman et al [3] proved the remarkable result that, when <i>f</i> is the equality function, the referee can solve this problem by comparing short \"quantum fingerprints\" sent by the two parties, i.e., there exists a quantum protocol using only <i>O</i>(log <i>n</i>) bits. This is in contrast to the well-known classical case for which Ω(<i>n</i><sup>1/2</sup>) bits are provably necessary for the same problem even with randomization. In this paper we show that short quantum fingerprints can be used to solve the problem for a much larger class of functions. Let <i>R</i><sup><??par line>,<i>pub</i></sup>(<i>f</i>) denote the number of bits needed in the classical case, assuming in addition a common sequence of random bits is known to all parties (the <i>public coin</i> model). We prove that, if <i>R</i><sup><??par line>,<i>pub</i></sup>(<i>f</i>)=<i>O</i>(1), then there exists a quantum protocol for <i>f</i> using only <i>O</i>(log <i>n</i>) bits. As an application we show that <i>O</i>(log <i>n</i>) quantum bits suffice for the bounded Hamming distance function, defined by <i>f(x,y)</i>=1 if and only if <i>x</i> and <i>y</i> have a constant Hamming distance <i>d</i> or less.",
"title": ""
},
{
"docid": "1c075aac5462cf6c6251d6c9c1a679c0",
"text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03",
"title": ""
},
{
"docid": "fe955c558353d973b5380acdfb1e4d4d",
"text": "Device scaling with manufacturing methods that overcome the inherent limits of optical lithography is a constant focus of the microelectronics industry. The authors of this tutorial article review the state-of-the-art and major challenges of subresolution lithography, and discuss modern double-patterning lithography solutions and supporting EDA tools to surpass them.",
"title": ""
},
{
"docid": "8fb7249b1caefa84ffa13eff7e026e8e",
"text": "Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: An academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.",
"title": ""
},
{
"docid": "7d2c39d173744a78e2415e68f075b474",
"text": "We address the problem of designing artificial agents capable of reproducing human behavior in a competitive game involving dynamic control. Given data consisting of multiple realizations of inputs generated by pairs of interacting players, we model each agent’s actions as governed by a time-varying latent goal state coupled to a control model. These goals, in turn, are described as stochastic processes evolving according to player-specific value functions depending on the current state of the game. We model these value functions using generative adversarial networks (GANs) and show that our GAN-based approach succeeds in producing sample gameplay that captures the rich dynamics of human agents. The latent goal dynamics inferred and generated by our model has applications to fields like neuroscience and animal behavior, where the underlying value functions themselves are of theoretical interest.",
"title": ""
},
{
"docid": "63e4183beadb30244730de8ac86b20ee",
"text": "Softwares use cryptographic algorithms to secure their communications and to protect their internal data. However the algorithm choice, its implementation design and the generation methods of its input parameters may have dramatic consequences on the security of the data it was initially supposed to protect. Therefore to assess the security of a binary program involving cryptography, analysts need to check that none of these points will cause a system vulnerability. It implies, as a first step, to precisely identify and locate the cryptographic code in the binary program. Since binary analysis is a difficult and cumbersome task, it is interesting to devise a method to automatically retrieve cryptographic primitives and their parameters.\n In this paper, we present a novel approach to automatically identify symmetric cryptographic algorithms and their parameters inside binary code. Our approach is static and based on DFG isomorphism. To cope with binary codes produced from different source codes and by different compilers and options, the DFG is normalized using code rewrite mechanisms. Our approach differs from previous works, that either use statistical criteria leading to imprecise results, or rely on heavy dynamic instrumentation. To validate our approach, we present experimental results on a set of synthetic samples including several cryptographic algorithms, binary code of well-known cryptographic libraries and reference source implementation compiled using different compilers and options.",
"title": ""
},
{
"docid": "8de48509fcd087f088b4203b306937b7",
"text": "Grid computing proposes a dynamic and earthly distributed organization of resources that harvest ideal CPU cycle to drift advance computing demands and accommodate user's prerequisites. Heterogeneous gridsdemand efficient allocation and scheduling strategies to cope up with the expanding grid automations. In order to obtain optimal scheduling solutions, primary focus of research has shifted towards metaheuristic techniques. The paper uses different parameters to provide analytical study of variants of Ant Colony Optimization for scheduling sequential jobs in grid systems. Based on the literature analysis, one can summarize that ACO is the most convincing technique for schedulingproblems. However, incapacitation of ACO to fix up a systematized startup and poor scattering capability cast down its efficiency. To overpower these constraints researchers have proposed different hybridizations of ACO that manages to sustain more effective results than standalone ACO.",
"title": ""
},
{
"docid": "4b5b8683a2b14e1658cc28e62a5a9379",
"text": "Recent research has Indicated (hat (he permanent magnet motor drives, which include the permanent magnet synchronous motor (PMSM) and the brushless dc motor (BDCM) could become serious competitors to the induction motor for servo applications. The PMSM has a sinusoidal back emf and requires sinusoidal stator currents to produce constant torque while the BDCM has a trapezoidal back emf and requires rectangular stator currents to produce constant torque. The PMSM is very similar to the wound rotor synchronous machine except that the PMSM that is used for servo applications tends not to have any damper windings and excitation is provided by a permanent magnet instead of a fi~ld winding. Hence the d, q model of the PMSM can be derived from the well-known model of the synchronous machine with the equations of the damper windings and field current dynamics removed. Because of tbe nonsinusoidal variation of the mutual inductances between the stator and rotor in the BDCM, it is also shown In this paper that no particular advantage exists in transforming the abc equations of the BCDM to the d, q frame. Hence the solution of the original abc equations is proposed for the BDCM.",
"title": ""
},
{
"docid": "97382e18c9ca7c42d8b6c908cde761f2",
"text": "In recent years, heatmap regression based models have shown their effectiveness in face alignment and pose estimation. However, Conventional Heatmap Regression (CHR) is not accurate nor stable when dealing with high-resolution facial videos, since it finds the maximum activated location in heatmaps which are generated from rounding coordinates, and thus leads to quantization errors when scaling back to the original high-resolution space. In this paper, we propose a Fractional Heatmap Regression (FHR) for high-resolution video-based face alignment. The proposed FHR can accurately estimate the fractional part according to the 2D Gaussian function by sampling three points in heatmaps. To further stabilize the landmarks among continuous video frames while maintaining the precise at the same time, we propose a novel stabilization loss that contains two terms to address time delay and non-smooth issues, respectively. Experiments on 300W, 300VW and Talking Face datasets clearly demonstrate that the proposed method is more accurate and stable than the state-ofthe-art models. Introduction Face alignment aims to estimate a set of facial landmarks given a face image or video sequence. It is a classic computer vision problem that has attributed to many advanced machine learning algorithms Fan et al. (2018); Bulat and Tzimiropoulos (2017); Trigeorgis et al. (2016); Peng et al. (2015, 2016); Kowalski, Naruniec, and Trzcinski (2017); Chen et al. (2017); Liu et al. (2017); Hu et al. (2018). Nowadays, with the rapid development of consumer hardwares (e.g., mobile phones, digital cameras), High-Resolution (HR) video sequences can be easily collected. Estimating facial landmarks on such highresolution facial data has tremendous applications, e.g., face makeup Chen, Shen, and Jia (2017), editing with special effects Korshunova et al. (2017) in live broadcast videos. However, most existing face alinement methods work on faces with medium image resolutions Chen et al. (2017); Bulat and Tzimiropoulos (2017); Peng et al. (2016); Liu et al. (2017). Therefore, developing face alignment algorithms for high-resolution videos is at the core of this paper. To this end, we propose an accurate and stable algorithm for high-resolution video-based face alignment, named Fractional Heatmap Regression (FHR). It is well known that ∗ indicates equal contributions. Conventional Heatmap Regression (CHR) Loss Fractional Heatmap Regression (FHR) Loss 930 744 411",
"title": ""
},
{
"docid": "88e97dc5105ef142d422bec88e897ddd",
"text": "This paper reports on an experiment realized on the IBM 5Q chip which demonstrates strong evidence for the advantage of using error detection and fault-tolerant design of quantum circuits. By showing that fault-tolerant quantum computation is already within our reach, the author hopes to encourage this approach.",
"title": ""
},
{
"docid": "2802c89f5b943ea0bee357b36d072ada",
"text": "Motivation: Alzheimer’s disease (AD) is an incurable neurological condition which causes progressive mental deterioration, especially in the elderly. The focus of our work is to improve our understanding about the progression of AD. By finding brain regions which degenerate together in AD we can understand how the disease progresses during the lifespan of an Alzheimer’s patient. Our aim is to work towards not only achieving diagnostic performance but also generate useful clinical information. Objective: The main objective of this study is to find important sub regions of the brain which undergo neuronal degeneration together during AD using deep learning algorithms and other machine learning techniques. Methodology: We extract 3D brain region patches from 100 subject MRI images using a predefined anatomical atlas. We have devised an ensemble of pair predictors which use 3D convolutional neural networks to extract salient features for AD from a pair of regions in the brain. We then train them in a supervised manner and use a boosting algorithm to find the weightage of each pair predictor towards the final classification. We use this weightage as the strength of correlation and saliency between the two input sub regions of the pair predictor. Result: We were able to retrieve sub regional association measures for 100 sub region pairs using the proposed method. Our approach was able to automatically learn sub regional association structure in AD directly from images. Our approach also provides an insight into computational methods for demarcating effects of AD from effects of ageing (and other neurological diseases) on our neuroanatomy. Our meta classifier gave a final accuracy of 81.79% for AD classification relative to healthy subjects using a single imaging modality dataset.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "08df1d2819d021711ebfc60589d27e90",
"text": "Polyp has long been considered as one of the major etiologies to colorectal cancer which is a fatal disease around the world, thus early detection and recognition of polyps plays an crucial role in clinical routines. Accurate diagnoses of polyps through endoscopes operated by physicians becomes a chanllenging task not only due to the varying expertise of physicians, but also the inherent nature of endoscopic inspections. To facilitate this process, computer-aid techniques that emphasize on fully-conventional image processing and novel machine learning enhanced approaches have been dedicatedly designed for polyp detection in endoscopic videos or images. Among all proposed algorithms, deep learning based methods take the lead in terms of multiple metrics in evolutions for algorithmic performance. In this work, a highly effective model, namely the faster region-based convolutional neural network (Faster R-CNN) is implemented for polyp detection. In comparison with the reported results of the state-of-the-art approaches on polyps detection, extensive experiments demonstrate that the Faster R-CNN achieves very competing results, and it is an efficient approach for clinical practice.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "f5519eff0c13e0ee42245fdf2627b8ae",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
},
{
"docid": "eff8079294d89665bbd8835902c4caa3",
"text": "Due to the growing developments in advanced metering and digital technologies, smart cities have been equipped with different electronic devices on the basis of Internet of Things (IoT), therefore becoming smarter than before. The aim of this article is that of providing a comprehensive review on the concepts of smart cities and on their motivations and applications. Moreover, this survey describes the IoT technologies for smart cities and the main components and features of a smart city. Furthermore, practical experiences over the world and the main challenges are explained.",
"title": ""
}
] |
scidocsrr
|
71a165ec484a007adf759bf85cb55490
|
Multi-domain Dialog State Tracking using Recurrent Neural Networks
|
[
{
"docid": "7f74c519207e469c39f81d52f39438a0",
"text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.",
"title": ""
},
{
"docid": "771b1e44b26f749f6ecd9fe515159d9c",
"text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.",
"title": ""
}
] |
[
{
"docid": "5945081c099c883d238dca2a1dfc821e",
"text": "Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5 % of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.",
"title": ""
},
{
"docid": "d97be71f6af0cb60a6c732f464e04b99",
"text": "The roles of mental health educators and professionals in the diffusion of mental health mobile apps are addressed in this viewpoint article. Mental health mobile apps are emerging technologies that fit under the broad heading of mobile health (mHealth). mHealth, encompassed within electronic health (eHealth), reflects the use of mobile devices for the practice of public health. Well-designed mental health mobile apps that present content in interactive, engaging, and stimulating ways can promote cognitive learning, personal growth, and mental health enhancement. As key influencers in the mental health social system, counselor educators and professional associations may either help or hinder diffusion of beneficial mHealth technologies. As mental health mobile apps move towards ubiquity, research will continue to be conducted. The studies published thus far, combined with the potential of mental health mobile apps for learning and personal growth, offer enough evidence to compel mental health professionals to infuse these technologies into education and practice. Counselor educators and professional associations must use their influential leadership roles to train students and practitioners in how to research, evaluate, and integrate mental health mobile apps into practice. The objectives of this article are to (1) increase awareness of mHealth and mental health mobile apps, (2) demonstrate the potential for continued growth in mental health mobile apps based on technology use and acceptance theory, mHealth organizational initiatives, and evidence about how humans learn, (3) discuss evidence-based benefits of mental health mobile apps, (4) examine the current state of mHealth diffusion in the mental health profession, and (5) offer solutions for impelling innovation diffusion by infusing mental health mobile apps into education, training, and clinical settings. This discussion has implications for counselor educators, mental health practitioners, associations, continuing education providers, and app developers.",
"title": ""
},
{
"docid": "bfdfac980d1629f85f5bd57705b11b19",
"text": "Deduplication is an approach of avoiding storing data blocks with identical content, and has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, it remains challenging to deploy deduplication in a real system, such as a cloud platform, where VM images are regularly inserted and retrieved. We propose LiveDFS, a live deduplication file system that enables deduplication storage of VM images in an open-source cloud that is deployed under low-cost commodity hardware settings with limited memory footprints. LiveDFS has several distinct features, including spatial locality, prefetching of metadata, and journaling. LiveDFS is POSIXcompliant and is implemented as a Linux kernel-space file system. We deploy our LiveDFS prototype as a storage layer in a cloud platform based on OpenStack, and conduct extensive experiments. Compared to an ordinary file system without deduplication, we show that LiveDFS can save at least 40% of space for storing VM images, while achieving reasonable performance in importing and retrieving VM images. Our work justifies the feasibility of deploying LiveDFS in an open-source cloud.",
"title": ""
},
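The core deduplication idea described above (store each fixed-size block once, keyed by a fingerprint of its content, and keep files as lists of fingerprints) can be illustrated with a toy in-memory sketch. This is not LiveDFS code; the block size, the SHA-256 fingerprint, and the dictionary-based stores are illustrative assumptions.

```python
# Toy illustration of fixed-size block deduplication: each unique block is
# stored once and every file is recorded as a list of block fingerprints.
import hashlib

BLOCK_SIZE = 4096          # assumed block size
block_store = {}           # fingerprint -> block bytes
file_index = {}            # filename -> list of fingerprints

def write_file(name: str, data: bytes) -> None:
    fps = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        block_store.setdefault(fp, block)   # store only if unseen
        fps.append(fp)
    file_index[name] = fps

def read_file(name: str) -> bytes:
    return b"".join(block_store[fp] for fp in file_index[name])

# Two "VM images" that share most blocks deduplicate well.
img = b"A" * 40960
write_file("vm1.img", img)
write_file("vm2.img", img[:-4096] + b"B" * 4096)
raw = len(img) * 2
stored = sum(len(b) for b in block_store.values())
print(f"raw {raw} bytes, stored {stored} bytes")
```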
{
"docid": "fe5b87cacf87c6eab9c252cef41c24d8",
"text": "The Filter Bank Common Spatial Pattern (FBCSP) algorithm employs multiple spatial filters to automatically select key temporal-spatial discriminative EEG characteristics and the Naïve Bayesian Parzen Window (NBPW) classifier using offline learning in EEG-based Brain-Computer Interfaces (BCI). However, it has yet to address the non-stationarity inherent in the EEG between the initial calibration session and subsequent online sessions. This paper presents the FBCSP that employs the NBPW classifier using online adaptive learning that augments the training data with available labeled data during online sessions. However, employing semi-supervised learning that simply augments the training data with available data using predicted labels can be detrimental to the classification accuracy. Hence, this paper presents the FBCSP using online semi-supervised learning that augments the training data with available data that matches the probabilistic model captured by the NBPW classifier using predicted labels. The performances of FBCSP using online adaptive and semi-supervised learning are evaluated on the BCI Competition IV datasets IIa and IIb and compared to the FBCSP using offline learning. The results showed that the FBCSP using online semi-supervised learning yielded relatively better session-to-session classification results compared against the FBCSP using offline learning. The FBCSP using online adaptive learning on true labels yielded the best results in both datasets, but the FBCSP using online semi-supervised learning on predicted labels is more practical in BCI applications where the true labels are not available.",
"title": ""
},
{
"docid": "71553bfb1631b815ea7e4e6fc3035e28",
"text": "Semi-supervised learning is a topic of practical importance because of the difficulty of obtaining numerous labeled data. In this paper, we apply an extension of adversarial autoencoder to semi-supervised learning tasks. In attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful to classify the images. We demonstrate that our method disentangles style and content of the input images and achieves less test error rate than vanilla autoencoder on MNIST semi-supervised classification tasks.",
"title": ""
},
{
"docid": "23677c0107696de3cc630f424484284a",
"text": "With the development of expressway, the vehicle path recognition based on RFID is designed and an Electronic Toll Collection system of expressway will be implemented. It uses a passive RFID tag as carrier to identify Actual vehicle path in loop road. The ETC system will toll collection without parking, also census traffic flow and audit road maintenance fees. It is necessary to improve expressway management.",
"title": ""
},
{
"docid": "72c0fecdbcc27b6af98373dc3c03333b",
"text": "The amino acid sequence of the heavy chain of Bombyx mori silk fibroin was derived from the gene sequence. The 5,263-residue (391-kDa) polypeptide chain comprises 12 low-complexity \"crystalline\" domains made up of Gly-X repeats and covering 94% of the sequence; X is Ala in 65%, Ser in 23%, and Tyr in 9% of the repeats. The remainder includes a nonrepetitive 151-residue header sequence, 11 nearly identical copies of a 43-residue spacer sequence, and a 58-residue C-terminal sequence. The header sequence is homologous to the N-terminal sequence of other fibroins with a completely different crystalline region. In Bombyx mori, each crystalline domain is made up of subdomains of approximately 70 residues, which in most cases begin with repeats of the GAGAGS hexapeptide and terminate with the GAAS tetrapeptide. Within the subdomains, the Gly-X alternance is strict, which strongly supports the classic Pauling-Corey model, in which beta-sheets pack on each other in alternating layers of Gly/Gly and X/X contacts. When fitting the actual sequence to that model, we propose that each subdomain forms a beta-strand and each crystalline domain a two-layered beta-sandwich, and we suggest that the beta-sheets may be parallel, rather than antiparallel, as has been assumed up to now.",
"title": ""
},
{
"docid": "35dc0d377749ebc6a004ce42ee0d55a0",
"text": "Two- and four-pole 0.7-1.1-GHz tunable bandpass-to-bandstop filters with bandwidth control are presented. The bandpass-to-bandstop transformation and the bandwidth control are achieved by adjusting the coupling coefficients in an asymmetrically loaded microstrip resonator. The source/load and input/output coupling coefficients are controlled using an RF microelectromechanical systems (RF MEMS) switch and a series coupling varactor, respectively. The two- and four-pole filters are built on a Duroid substrate with ε r=6.15 and h=25 mil. The tuning for the center frequency and the bandwidth is done using silicon varactor diodes, and RF MEMS switches are used for the bandpass-to-bandstop transformation. In the bandpass mode of the two-pole filter, a center frequency tuning of 0.78-1.10 GHz is achieved with a tunable 1-dB bandwidth of 68-120 MHz at 0.95 GHz. The rejection level of the two-pole bandstop mode is higher than 30 dB. The bandpass mode in the four-pole filter has a center frequency tuning of 0.76-1.08 GHz and a tunable 1-dB bandwidth of 64-115 MHz at 0.94 GHz. The rejection level of the four-pole bandstop mode is larger than 40 dB. The application areas are in wideband cognitive radios under high interference environments.",
"title": ""
},
{
"docid": "b32b16971f9dd1375785a85617b3bd2a",
"text": "White matter hyperintensities (WMHs) in the brain are the consequence of cerebral small vessel disease, and can easily be detected on MRI. Over the past three decades, research has shown that the presence and extent of white matter hyperintense signals on MRI are important for clinical outcome, in terms of cognitive and functional impairment. Large, longitudinal population-based and hospital-based studies have confirmed a dose-dependent relationship between WMHs and clinical outcome, and have demonstrated a causal link between large confluent WMHs and dementia and disability. Adequate differential diagnostic assessment and management is of the utmost importance in any patient, but most notably those with incipient cognitive impairment. Novel imaging techniques such as diffusion tensor imaging might reveal subtle damage before it is visible on standard MRI. Even in Alzheimer disease, which is thought to be primarily caused by amyloid, vascular pathology, such as small vessel disease, may be of greater importance than amyloid itself in terms of influencing the disease course, especially in older individuals. Modification of risk factors for small vessel disease could be an important therapeutic goal, although evidence for effective interventions is still lacking. Here, we provide a timely Review on WMHs, including their relationship with cognitive decline and dementia.",
"title": ""
},
{
"docid": "2b266ebb64f14c3059938b34d72b8b19",
"text": "Preprocessing is an important task and critical step in information retrieval and text mining. The objective of this study is to analyze the effect of preprocessing methods in text classification on Turkish texts. We compiled two large datasets from Turkish newspapers using a crawler. On these compiled data sets and using two additional datasets, we perform a detailed analysis of preprocessing methods such as stemming, stopword filtering and word weighting for Turkish text classification on several different Turkish datasets. We report the results of extensive experiments.",
"title": ""
},
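As a concrete illustration of the preprocessing steps studied above (stopword filtering, stemming, and word weighting) feeding a text classifier, a minimal scikit-learn pipeline is sketched below. The tiny English corpus, the Porter stemmer, the toy stopword list, and the linear-SVM classifier are stand-ins chosen for brevity, not the Turkish datasets or methods used in the study; nltk and scikit-learn are assumed to be installed.

```python
# Minimal sketch of a preprocessing + weighting + classification pipeline:
# lowercasing, stopword removal, stemming, TF-IDF weighting, then a linear SVM.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = PorterStemmer()
stopwords = {"the", "a", "an", "is", "are", "of", "and"}   # toy stopword list

def analyzer(doc: str):
    tokens = doc.lower().split()
    return [stemmer.stem(t) for t in tokens if t not in stopwords]

docs = ["the match ended with a late goal",
        "the striker scored two goals",
        "the central bank raised interest rates",
        "markets fell after the rate decision"]
labels = ["sports", "sports", "economy", "economy"]

model = make_pipeline(TfidfVectorizer(analyzer=analyzer), LinearSVC())
model.fit(docs, labels)
print(model.predict(["another goal was scored in the match"]))
```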
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "a2f9c6f2e4833c0d1e8cd231de81740f",
"text": "This paper presents a project named \"Vision of Future Energy Networks\", which aims at a greenfield approach for future energy systems. The definition of energy hubs and the conception of combined interconnector devices represent key approaches towards a multicarrier greenfield layout. Models and tools for technical, economical and environmental investigations in multicarrier energy systems have been developed and used in various case studies",
"title": ""
},
{
"docid": "2e7513624eed605a4e0da539162dd715",
"text": "In the domain of Internet of Things (IoT), applications are modeled to understand and react based on existing contextual and situational parameters. This work implements a management flow for the abstraction of real world objects and virtual composition of those objects to provide IoT services. We also present a real world knowledge model that aggregates constraints defining a situation, which is then used to detect and anticipate future potential situations. It is implemented based on reasoning and machine learning mechanisms. This work showcases a prototype implementation of the architectural framework in a smart home scenario, targeting two functionalities: actuation and automation based on the imposed constraints and thereby responding to situations and also adapting to the user preferences. It thus provides a productive integration of heterogeneous devices, IoT platforms, and cognitive technologies to improve the services provided to the user.",
"title": ""
},
{
"docid": "5f483cfb3949e8feb109533344aa32be",
"text": "Shadows and highlights represent a challenge to the computer vision researchers due to a variance in the brightness on the surfaces of the objects under consideration. This paper presents a new colour detection and segmentation algorithm for road signs in which the effect of shadows and highlights are neglected to get better colour segmentation results. Images are taken by a digital camera mounted in a car. The RGB images are converted into HSV colour space and the shadow-highlight invariant method is applied to extract the colours of the road signs under shadow and highlight conditions. The method is tested on hundreds of outdoor images under such light conditions, and it shows high robustness; more than 95% of correct segmentation is achieved",
"title": ""
},
{
"docid": "8873369cf69e5de3b97875404f6aea64",
"text": "BACKGROUND\nTobacco smoke exposure (TSE) is a worldwide health problem and it is considered a risk factor for pregnant women's and children's health, particularly for respiratory morbidity during the first year of life. Few significant birth cohort studies on the effect of prenatal TSE via passive and active maternal smoking on the development of severe bronchiolitis in early childhood have been carried out worldwide.\n\n\nMETHODS\nFrom November 2009 to December 2012, newborns born at ≥ 33 weeks of gestational age (wGA) were recruited in a longitudinal multi-center cohort study in Italy to investigate the effects of prenatal and postnatal TSE, among other risk factors, on bronchiolitis hospitalization and/or death during the first year of life.\n\n\nRESULTS\nTwo thousand two hundred ten newborns enrolled at birth were followed-up during their first year of life. Of these, 120 (5.4%) were hospitalized for bronchiolitis. No enrolled infants died during the study period. Prenatal passive TSE and maternal active smoking of more than 15 cigarettes/daily are associated to a significant increase of the risk of offspring children hospitalization for bronchiolitis, with an adjHR of 3.5 (CI 1.5-8.1) and of 1.7 (CI 1.1-2.6) respectively.\n\n\nCONCLUSIONS\nThese results confirm the detrimental effects of passive TSE and active heavy smoke during pregnancy for infants' respiratory health, since the exposure significantly increases the risk of hospitalization for bronchiolitis in the first year of life.",
"title": ""
},
{
"docid": "2cccf75ea5ceeedf8b723dda71e64f3e",
"text": "A memetic algorithm (MA), i.e. an evolutionary algorithm making use of local search, for the quadratic assignment problem is presented. A new recombination operator for realizing the approach is described, and the behavior of the MA is investigated on a set of problem instances containing between 25 and 100 facilities/locations. The results indicate that the proposed MA is able to produce high quality solutions quickly. A comparison of the MA with some of the currently best alternative approaches – reactive tabu search, robust tabu search and the fast ant colony system – demonstrates that the MA outperforms its competitors on all studied problem instances of practical interest.",
"title": ""
},
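A memetic algorithm is an evolutionary loop in which every offspring is improved by local search before it competes for survival. The sketch below shows that structure on a tiny random QAP instance; the simple "keep agreements, fill the rest randomly" crossover and the 2-swap local search are generic stand-ins for the specialized recombination operator described above, and the instance size and parameters are arbitrary.

```python
# Memetic algorithm sketch for the quadratic assignment problem (QAP):
# GA-style recombination plus a 2-swap local search applied to each offspring.
import random

n = 8
random.seed(1)
F = [[random.randint(1, 9) if i != j else 0 for j in range(n)] for i in range(n)]  # flows
D = [[random.randint(1, 9) if i != j else 0 for j in range(n)] for i in range(n)]  # distances

def cost(perm):
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def local_search(perm):
    perm, c = perm[:], cost(perm)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]      # try a 2-swap
                c2 = cost(perm)
                if c2 < c:
                    c, improved = c2, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo if not better
    return perm

def crossover(p1, p2):
    # Keep positions where the parents agree, fill the rest with the
    # missing facilities in random order (a simple stand-in operator).
    child = [a if a == b else None for a, b in zip(p1, p2)]
    missing = [x for x in range(n) if x not in child]
    random.shuffle(missing)
    it = iter(missing)
    return [x if x is not None else next(it) for x in child]

pop = [local_search(random.sample(range(n), n)) for _ in range(10)]
for gen in range(30):
    p1, p2 = random.sample(pop, 2)
    child = local_search(crossover(p1, p2))
    worst = max(pop, key=cost)
    if cost(child) < cost(worst):
        pop[pop.index(worst)] = child      # replace the worst individual
print("best cost:", min(cost(p) for p in pop))
```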
{
"docid": "083989d115f6942b362c06936b2775ea",
"text": "In humans, nearly two meters of genomic material must be folded to fit inside each micrometer-scale cell nucleus while remaining accessible for gene transcription, DNA replication, and DNA repair. This fact highlights the need for mechanisms governing genome organization during any activity and to maintain the physical organization of chromosomes at all times. Insight into the functions and three-dimensional structures of genomes comes mostly from the application of visual techniques such as fluorescence in situ hybridization (FISH) and molecular approaches including chromosome conformation capture (3C) technologies. Recent developments in both types of approaches now offer the possibility of exploring the folded state of an entire genome and maybe even the identification of how complex molecular machines govern its shape. In this review, we present key methodologies used to study genome organization and discuss what they reveal about chromosome conformation as it relates to transcription regulation across genomic scales in mammals.",
"title": ""
},
{
"docid": "eafa6403e38d2ceb63ef7c00f84efe77",
"text": "We propose a novel approach to learning distributed representations of variable-length text sequences in multiple languages simultaneously. Unlike previous work which often derive representations of multi-word sequences as weighted sums of individual word vectors, our model learns distributed representations for phrases and sentences as a whole. Our work is similar in spirit to the recent paragraph vector approach but extends to the bilingual context so as to efficiently encode meaning-equivalent text sequences of multiple languages in the same semantic space. Our learned embeddings achieve state-of-theart performance in the often used crosslingual document classification task (CLDC) with an accuracy of 92.7 for English to German and 91.5 for German to English. By learning text sequence representations as a whole, our model performs equally well in both classification directions in the CLDC task in which past work did not achieve.",
"title": ""
},
{
"docid": "be7a33cc59e8fb297c994d046c6874d9",
"text": "Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is one of the powerful ways to reduce the scan time of MR imaging with performance guarantee. However, the computational costs are usually expensive. This paper aims to propose a computationally fast and accurate deep learning algorithm for the reconstruction of MR images from highly down-sampled k-space data. Theory: Based on the topological analysis, we show that the data manifold of the aliasing artifact is easier to learn from a uniform subsampling pattern with additional low-frequency k-space data. Thus, we develop deep aliasing artifact learning networks for the magnitude and phase images to estimate and remove the aliasing artifacts from highly accelerated MR acquisition. Methods: The aliasing artifacts are directly estimated from the distorted magnitude and phase images reconstructed from subsampled k-space data so that we can get an aliasing-free images by subtracting the estimated aliasing artifact from corrupted inputs. Moreover, to deal with the globally distributed aliasing artifact, we develop a multi-scale deep neural network with a large receptive field. Results: The experimental results confirm that the proposed deep artifact learning network effectively estimates and removes the aliasing artifacts. Compared to existing CS methods from single and multi-coli data, the proposed network shows minimal errors by removing the coherent aliasing artifacts. Furthermore, the computational time is by order of magnitude faster. Conclusion: As the proposed deep artifact learning network immediately generates accurate reconstruction, it has great potential for clinical applications.",
"title": ""
},
{
"docid": "c3a85f2f7e70bc7059604377ac19d994",
"text": "Processor concepts, implementation details, and performance analysis are fundamental in computer architecture education, and MIPS (microprocessor without interlocked pipeline stages) processor designs are used by many universities in teaching the subject. In this paper we present a MIPS32 processor simulator, which enriches students’ learning and instructors’ teaching experiences. A family of single-cycle, multi-cycle, and pipeline processor models for the MIPS32 architecture are developed according to the parallel Discrete Event System Specification (DEVS) modeling formalism. A collection of elementary sequential and combinational model components along with the processor models are implemented in DEVS-Suite. The simulator supports multi-level model abstractions, register-transfer level animation, performance data collection, and time-based trajectory observation. These features, which are partially supported by a few existing simulators, enable important structural and behavioral details of computer architectures to be described and understood. The MIPS processor models can be reused and systematically extended for modeling and simulating other MIPS processors.",
"title": ""
}
] |
scidocsrr
|
03b8497dfb86e54bc80bbbc1730be3b6
|
Modeling Cyber-Physical Systems with Semantic Agents
|
[
{
"docid": "73dd590da37ffec2d698142bee2e23fb",
"text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of interacting autonomous agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to do research. Some have gone so far as to contend that ABMS is a new way of doing science. Computational advances make possible a growing number of agent-based applications across many fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling the growth and decline of ancient civilizations to modeling the complexities of the human immune system, and many more. This tutorial describes the foundations of ABMS, identifies ABMS toolkits and development methods illustrated through a supply chain example, and provides thoughts on the appropriate contexts for ABMS versus conventional modeling techniques.",
"title": ""
}
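To make the "interacting autonomous agents" idea in the passage above concrete, here is a deliberately tiny agent-based sketch; it is not taken from the tutorial, and the opinion-copying rule, population size, and step count are arbitrary assumptions chosen only to show how a population-level pattern emerges from purely local agent rules.

```python
# Tiny agent-based simulation: each agent copies the state of a randomly
# chosen peer every step, so a global consensus emerges from local rules.
import random

class Agent:
    def __init__(self, state: int):
        self.state = state

    def step(self, population):
        peer = random.choice(population)
        if peer is not self:
            self.state = peer.state

random.seed(42)
agents = [Agent(random.randint(0, 1)) for _ in range(200)]

for t in range(50):
    for a in random.sample(agents, len(agents)):   # random activation order
        a.step(agents)
    share = sum(a.state for a in agents) / len(agents)
    if t % 10 == 0:
        print(f"step {t:2d}: share of state 1 = {share:.2f}")
```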
] |
[
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "b6fff873c084e9a44d870ffafadbc9e7",
"text": "A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require their own permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user’s interaction and stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries, and providing services to validate the legitimacy of clicks, locally and remotely. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. AdSplit also supports a system resource that allows advertisements to display their content in an embedded HTML widget, without requiring any native code.",
"title": ""
},
{
"docid": "47ee1b71ed10b64110b84e5eecf2857c",
"text": "Measurements for future outdoor cellular systems at 28 GHz and 38 GHz were conducted in urban microcellular environments in New York City and Austin, Texas, respectively. Measurements in both line-of-sight and non-line-of-sight scenarios used multiple combinations of steerable transmit and receive antennas (e.g. 24.5 dBi horn antennas with 10.9° half power beamwidths at 28 GHz, 25 dBi horn antennas with 7.8° half power beamwidths at 38 GHz, and 13.3 dBi horn antennas with 24.7° half power beamwidths at 38 GHz) at different transmit antenna heights. Based on the measured data, we present path loss models suitable for the development of fifth generation (5G) standards that show the distance dependency of received power. In this paper, path loss is expressed in easy-to-use formulas as the sum of a distant dependent path loss factor, a floating intercept, and a shadowing factor that minimizes the mean square error fit to the empirical data. The new models are compared with previous models that were limited to using a close-in free space reference distance. Here, we illustrate the differences of the two modeling approaches, and show that a floating intercept model reduces the shadow factors by several dB and offers smaller path loss exponents while simultaneously providing a better fit to the empirical data. The upshot of these new path loss models is that coverage is actually better than first suggested by work in [1], [7] and [8].",
"title": ""
},
{
"docid": "9e451fe70d74511d2cc5a58b667da526",
"text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "b42b17131236abc1ee3066905025aa8c",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "6e4d846272030b160b30d56a60eb2cad",
"text": "MapReduce and Spark are two very popular open source cluster computing frameworks for large scale data analytics. These frameworks hide the complexity of task parallelism and fault-tolerance, by exposing a simple programming API to users. In this paper, we evaluate the major architectural components in MapReduce and Spark frameworks including: shuffle, execution model, and caching, by using a set of important analytic workloads. To conduct a detailed analysis, we developed two profiling tools: (1) We correlate the task execution plan with the resource utilization for both MapReduce and Spark, and visually present this correlation; (2) We provide a break-down of the task execution time for in-depth analysis. Through detailed experiments, we quantify the performance differences between MapReduce and Spark. Furthermore, we attribute these performance differences to different components which are architected differently in the two frameworks. We further expose the source of these performance differences by using a set of micro-benchmark experiments. Overall, our experiments show that Spark is about 2.5x, 5x, and 5x faster than MapReduce, for Word Count, k-means, and PageRank, respectively. The main causes of these speedups are the efficiency of the hash-based aggregation component for combine, as well as reduced CPU and disk overheads due to RDD caching in Spark. An exception to this is the Sort workload, for which MapReduce is 2x faster than Spark. We show that MapReduce’s execution model is more efficient for shuffling data than Spark, thus making Sort run faster on MapReduce.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "68f0f63fcfa29d3867fa7d2dea6807cc",
"text": "We propose a machine learning framework to capture the dynamics of highfrequency limit order books in financial equity markets and automate real-time prediction of metrics such as mid-price movement and price spread crossing. By characterizing each entry in a limit order book with a vector of attributes such as price and volume at different levels, the proposed framework builds a learning model for each metric with the help of multi-class support vector machines (SVMs). Experiments with real data establish that features selected by the proposed framework are effective for short term price movement forecasts.",
"title": ""
},
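The passage above describes turning each limit-order-book snapshot into a feature vector (prices and volumes at several levels) and training multi-class SVMs to predict metrics such as the next mid-price move. The sketch below reproduces only that shape on synthetic data; the feature set, the imbalance-based label rule, and the SVM settings are assumptions and do not reproduce the paper's features or labels.

```python
# Sketch: multi-class SVM predicting mid-price movement (down/flat/up) from
# limit order book features such as level volumes, spread, and imbalance.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n, levels = 2000, 5
bid_v = rng.exponential(100, size=(n, levels))
ask_v = rng.exponential(100, size=(n, levels))
spread = rng.uniform(0.01, 0.05, size=(n, 1))
imbalance = (bid_v.sum(1) - ask_v.sum(1)) / (bid_v.sum(1) + ask_v.sum(1))

X = np.hstack([bid_v, ask_v, spread, imbalance[:, None]])
# Hypothetical labels: strong order-book imbalance pushes the mid-price.
y = np.where(imbalance > 0.15, 2, np.where(imbalance < -0.15, 0, 1))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)  # one-vs-one multi-class
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```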
{
"docid": "35c299197861d0a57763bbc392e90bb2",
"text": "Imperfect-information games, where players have private information, pose a unique challenge in artificial intelligence. In recent years, Heads-Up NoLimit Texas Hold’em poker, a popular version of poker, has emerged as the primary benchmark for evaluating game-solving algorithms for imperfectinformation games. We demonstrate a winning agent from the 2016 Annual Computer Poker Competition, Baby Tartanian8.",
"title": ""
},
{
"docid": "61953c398f2bcd4fd0ff4662689293a0",
"text": "Today's smartphones and mobile devices typically embed advanced motion sensors. Due to their increasing market penetration, there is a potential for the development of distributed sensing platforms. In particular, over the last few years there has been an increasing interest in monitoring vehicles and driving data, aiming to identify risky driving maneuvers and to improve driver efficiency. Such a driver profiling system can be useful in fleet management, insurance premium adjustment, fuel consumption optimization or CO2 emission reduction. In this paper, we analyze how smartphone sensors can be used to identify driving maneuvers and propose SenseFleet, a driver profile platform that is able to detect risky driving events independently from the mobile device and vehicle. A fuzzy system is used to compute a score for the different drivers using real-time context information like route topology or weather conditions. To validate our platform, we present an evaluation study considering multiple drivers along a predefined path. The results show that our platform is able to accurately detect risky driving events and provide a representative score for each individual driver.",
"title": ""
},
{
"docid": "ff4cfe56f31e21a8f69164790eb39634",
"text": "Active individuals often perform exercises in the heat following heat stress exposure (HSE) regardless of the time-of-day and its variation in body temperature. However, there is no information concerning the diurnal effects of a rise in body temperature after HSE on subsequent exercise performance in a hot environnment. This study therefore investigated the diurnal effects of prior HSE on both sprint and endurance exercise capacity in the heat. Eight male volunteers completed four trials which included sprint and endurance cycling tests at 30 °C and 50% relative humidity. At first, volunteers completed a 30-min pre-exercise routine (30-PR): a seated rest in a temperate environment in AM (AmR) or PM (PmR) (Rest trials); and a warm water immersion at 40 °C to induce a 1 °C increase in core temperature in AM (AmW) or PM (PmW) (HSE trials). Volunteers subsequently commenced exercise at 0800 h in AmR/AmW and at 1700 h in PmR/PmW. The sprint test determined a 10-sec maximal sprint power at 5 kp. Then, the endurance test was conducted to measure time to exhaustion at 60% peak oxygen uptake. Maximal sprint power was similar between trials (p = 0.787). Time to exhaustion in AmW (mean±SD; 15 ± 8 min) was less than AmR (38 ± 16 min; p < 0.01) and PmR (43 ± 24 min; p < 0.01) but similar with PmW (24 ± 9 min). Core temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and at post 30-PR and the start of the endurance test in PmR than AmR (p < 0.05). The rate of rise in core temperature during the endurance test was greater in AmR than AmW and PmW (p < 0.05). Mean skin temperature was higher from post 30-PR to 6 min into the endurance test in HSE trials than Rest trials (p < 0.05). Mean body temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and the start to 6 min into the endurance test in PmR than AmR (p < 0.05). Convective, radiant, dry and evaporative heat losses were greater on HSE trials than on Rest trials (p < 0.001). Heart rate and cutaneous vascular conductance were higher at post 30-PR in HSE trials than Rest trials (p < 0.05). Thermal sensation was higher from post 30-PR to the start of the endurance test in AmW and PmW than AmR and PmR (p < 0.05). Perceived exertion from the start to 6 min into the endurance test was higher in HSE trials than Rest trials (p < 0.05). This study demonstrates that an approximately 1 °C increase in core temperature by prior HSE has the diurnal effects on endurance exercise capacity but not on sprint exercise capacity in the heat. Moreover, prior HSE reduces endurance exercise capacity in AM, but not in PM. This reduction is associated with a large difference in pre-exercise core temperature between AM trials which is caused by a relatively lower body temperature in the morning due to the time-of-day variation and contributes to lengthening the attainment of high core temperature during exercise in AmR.",
"title": ""
},
{
"docid": "27136e888c3ebfef4ea7105d68a13ffd",
"text": "The huge amount of (potentially) available spectrum makes millimeter wave (mmWave) a promising candidate for fifth generation cellular networks. Unfortunately, differences in the propagation environment as a function of frequency make it hard to make comparisons between systems operating at mmWave and microwave frequencies. This paper presents a simple channel model for evaluating system level performance in mmWave cellular networks. The model uses insights from measurement results that show mmWave is sensitive to blockages revealing very different path loss characteristics between line-of-sight (LOS) and non-line-of-sight (NLOS) links. The conventional path loss model with a single log-distance path loss function and a shadowing term is replaced with a stochastic path loss model with a distance-dependent LOS probability and two different path loss functions to account for LOS and NLOS links. The proposed model is used to compare microwave and mmWave networks in simulations. It is observed that mmWave networks can provide comparable coverage probability with a dense deployment, leading to much higher data rates thanks to the large bandwidth available in the mmWave spectrum.",
"title": ""
},
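The channel model sketched in the passage above replaces a single log-distance path loss curve with a distance-dependent LOS probability and separate LOS/NLOS path loss exponents plus log-normal shadowing. A compact numerical sketch follows; all constants (the exponents, shadowing standard deviations, and the LOS decay distance) are illustrative assumptions rather than the paper's fitted values.

```python
# Sketch of a stochastic LOS/NLOS path loss model: a link is LOS with a
# distance-dependent probability, and each state has its own exponent and
# log-normal shadowing term. All constants below are illustrative only.
import math
import random

FREQ_GHZ = 28.0
FSPL_1M = 32.45 + 20 * math.log10(FREQ_GHZ)     # free-space loss at 1 m, dB

def path_loss_db(d_m: float) -> float:
    p_los = math.exp(-d_m / 67.0)               # assumed LOS probability decay
    if random.random() < p_los:
        n, sigma = 2.0, 4.0                     # LOS exponent / shadowing (dB)
    else:
        n, sigma = 3.3, 7.0                     # NLOS exponent / shadowing (dB)
    return FSPL_1M + 10 * n * math.log10(d_m) + random.gauss(0.0, sigma)

random.seed(7)
for d in (10, 50, 100, 200):
    samples = [path_loss_db(d) for _ in range(1000)]
    print(f"d = {d:3d} m: mean path loss ~ {sum(samples)/len(samples):.1f} dB")
```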
{
"docid": "1d7d96d37584398359f9b85bc7741578",
"text": "BACKGROUND\nTwo types of soft tissue filler that are in common use are those formulated primarily with calcium hydroxylapatite (CaHA) and those with cross-linked hyaluronic acid (cross-linked HA).\n\n\nOBJECTIVE\nTo provide physicians with a scientific rationale for determining which soft tissue fillers are most appropriate for volume replacement.\n\n\nMATERIALS\nSix cross-linked HA soft tissue fillers (Restylane and Perlane from Medicis, Scottsdale, AZ; Restylane SubQ from Q-Med, Uppsala, Sweden; and Juvéderm Ultra, Juvéderm Ultra Plus, and Juvéderm Voluma from Allergan, Pringy, France) and a soft tissue filler consisting of CaHA microspheres in a carrier gel containing carboxymethyl cellulose (Radiesse, BioForm Medical, Inc., San Mateo, CA). METHODS The viscosity and elasticity of each filler gel were quantified according to deformation oscillation measurements conducted using a Thermo Haake RS600 Rheometer (Newington, NH) using a plate and plate geometry with a 1.2-mm gap. All measurements were performed using a 35-mm titanium sensor at 30°C. Oscillation measurements were taken at 5 pascal tau (τ) over a frequency range of 0.1 to 10 Hz (interpolated at 0.7 Hz). Researchers chose the 0.7-Hz frequency because it elicited the most reproducible results and was considered physiologically relevant for stresses that are common to the skin. RESULTS The rheological measurements in this study support the concept that soft tissue fillers that are currently used can be divided into three groups. CONCLUSION Rheological evaluation enables the clinician to objectively classify soft tissue fillers, to select specific filler products based on scientific principles, and to reliably predict how these products will perform--lifting, supporting, and sculpting--after they are appropriately injected.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "46980b89e76bc39bf125f63ed9781628",
"text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.",
"title": ""
},
{
"docid": "6784e31e2ec313698a622a7e78288f68",
"text": "Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the WorldWide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. However, while there are clever tools developed to understand on-line customer’s behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour on web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the leaning process, as well as for the learners to help them in their learning endeavour.",
"title": ""
},
{
"docid": "3b9b49f8c2773497f8e05bff4a594207",
"text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%[email protected] on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.",
"title": ""
},
{
"docid": "fe3a2ef6ffc3e667f73b19f01c14d15a",
"text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.",
"title": ""
}
] |
scidocsrr
|
3bad9c8d77eeb7d45ecc853380175289
|
Accurate, Fast Fall Detection Using Gyroscopes and Accelerometer-Derived Posture Information
|
[
{
"docid": "2841935e11a246a68d71cca27728f387",
"text": "Unintentional falls are a common cause of severe injury in the elderly population. By introducing small, non-invasive sensor motes in conjunction with a wireless network, the Ivy Project aims to provide a path towards more independent living for the elderly. Using a small device worn on the waist and a network of fixed motes in the home environment, we can detect the occurrence of a fall and the location of the victim. Low-cost and low-power MEMS accelerometers are used to detect the fall while RF signal strength is used to locate the person",
"title": ""
}
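The passage describes detecting falls with a waist-worn MEMS accelerometer. A common baseline (not necessarily the Ivy Project's algorithm) is to flag an impact when the acceleration magnitude departs strongly from 1 g and then confirm a lying posture afterwards, as sketched below on a synthetic trace; the thresholds, axis convention, and post-impact window are assumptions.

```python
# Baseline fall-detector sketch: an impact is a large deviation of the
# acceleration magnitude from 1 g, confirmed by a near-horizontal posture
# (small vertical component) shortly afterwards. Thresholds are assumptions.
import math

G = 9.81
IMPACT_THRESHOLD = 2.5 * G        # magnitude spike indicating an impact
LYING_VERTICAL_MAX = 0.4 * G      # |vertical accel| when lying roughly flat

def detect_fall(samples, post_window=50):
    """samples: list of (ax, ay, az) in m/s^2, az = vertical axis when upright."""
    for i, (ax, ay, az) in enumerate(samples):
        if math.sqrt(ax * ax + ay * ay + az * az) > IMPACT_THRESHOLD:
            after = samples[i + 1:i + 1 + post_window]
            if after and all(abs(az2) < LYING_VERTICAL_MAX for _, _, az2 in after):
                return i                      # index of the detected impact
    return None

# Synthetic trace: standing (1 g vertical), an impact spike, then lying down.
standing = [(0.0, 0.0, G)] * 100
impact   = [(12.0, 5.0, 25.0)]
lying    = [(G, 0.0, 0.5)] * 100              # gravity now along the x axis
print("fall detected at sample:", detect_fall(standing + impact + lying))
```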
] |
[
{
"docid": "afae94714340326278c1629aa4ecc48c",
"text": "The purpose of this investigation was to examine the influence of upper-body static stretching and dynamic stretching on upper-body muscular performance. Eleven healthy men, who were National Collegiate Athletic Association Division I track and field athletes (age, 19.6 +/- 1.7 years; body mass, 93.7 +/- 13.8 kg; height, 183.6 +/- 4.6 cm; bench press 1 repetition maximum [1RM], 106.2 +/- 23.0 kg), participated in this study. Over 4 sessions, subjects participated in 4 different stretching protocols (i.e., no stretching, static stretching, dynamic stretching, and combined static and dynamic stretching) in a balanced randomized order followed by 4 tests: 30% of 1 RM bench throw, isometric bench press, overhead medicine ball throw, and lateral medicine ball throw. Depending on the exercise, test peak power (Pmax), peak force (Fmax), peak acceleration (Amax), peak velocity (Vmax), and peak displacement (Dmax) were measured. There were no differences among stretch trials for Pmax, Fmax, Amax, Vmax, or Dmax for the bench throw or for Fmax for the isometric bench press. For the overhead medicine ball throw, there were no differences among stretch trials for Vmax or Dmax. For the lateral medicine ball throw, there was no difference in Vmax among stretch trials; however, Dmax was significantly larger (p </= 0.05) for the static and dynamic condition compared to the static-only condition. In general, there was no short-term effect of stretching on upper-body muscular performance in young adult male athletes, regardless of stretch mode, potentially due to the amount of rest used after stretching before the performances. Since throwing performance was largely unaffected by static or dynamic upper-body stretching, athletes competing in the field events could perform upper-body stretching, if enough time were allowed before the performance. However, prior studies on lower-body musculature have demonstrated dramatic negative effects on speed and power. Therefore, it is recommended that a dynamic warm-up be used for the entire warm-up.",
"title": ""
},
{
"docid": "551d642efa547b9d8c089b8ecb9530fb",
"text": "Using piezoelectric materials to harvest energy from ambient vibrations to power wireless sensors has been of great interest over the past few years. Due to the power output of the piezoelectric materials is relatively low, rechargeable battery is considered as one kind of energy storage to accumulate the harvested energy for intermittent use. Piezoelectric harvesting circuits for rechargeable batteries have two schemes: non-adaptive and adaptive ones. A non-adaptive harvesting scheme includes a conventional diode bridge rectifier and a passive circuit. In recent years, several researchers have developed adaptive schemes for the harvesting circuit. Among them, the adaptive harvesting scheme by Ottman et al. is the most promising. This paper is aimed to quantify the performances of adaptive and non-adaptive schemes and to discuss their performance characteristics.",
"title": ""
},
{
"docid": "3776b7fdcd1460b60a18c87cd60b639e",
"text": "A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets etc. While several variants of sketches have been proposed in the past, existing sketches still have a significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called Slim-subsketch and a large sketch called Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.",
"title": ""
},
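For readers unfamiliar with the sketch data structure being improved above, a compact count-min (CM) sketch, the baseline that SF-sketch is compared against rather than SF-sketch itself, can be written in a few lines: d hash rows of w counters, an increment touches one counter per row, and a query returns the minimum of the d counters, which upper-bounds the true frequency. The width, depth, and hashing scheme below are arbitrary choices.

```python
# Minimal count-min sketch: the baseline structure that SF-sketch improves on.
# Estimates are biased upward, since hash collisions only ever add counts.
import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, item: str):
        # One salted hash per row selects the counter to touch in that row.
        for row in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=bytes([row])).digest()
            yield row, int.from_bytes(h[:8], "big") % self.width

    def add(self, item: str, count: int = 1) -> None:
        for row, col in self._buckets(item):
            self.rows[row][col] += count

    def query(self, item: str) -> int:
        return min(self.rows[row][col] for row, col in self._buckets(item))

cms = CountMinSketch()
for flow, pkts in [("10.0.0.1", 500), ("10.0.0.2", 40), ("10.0.0.3", 3)]:
    cms.add(flow, pkts)
print([cms.query(f) for f in ("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.9")])
```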
{
"docid": "5eab71f546a7dc8bae157a0ca4dd7444",
"text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.",
"title": ""
},
{
"docid": "e2a7ff093714cc6a0543816b3d7c08e9",
"text": "Microblogs such as Twitter reflect the general public’s reactions to major events. Bursty topics from microblogs reveal what events have attracted the most online attention. Although bursty event detection from text streams has been studied before, previous work may not be suitable for microblogs because compared with other text streams such as news articles and scientific publications, microblog posts are particularly diverse and noisy. To find topics that have bursty patterns on microblogs, we propose a topic model that simultaneously captures two observations: (1) posts published around the same time are more likely to have the same topic, and (2) posts published by the same user are more likely to have the same topic. The former helps find eventdriven posts while the latter helps identify and filter out “personal” posts. Our experiments on a large Twitter dataset show that there are more meaningful and unique bursty topics in the top-ranked results returned by our model than an LDA baseline and two degenerate variations of our model. We also show some case studies that demonstrate the importance of considering both the temporal information and users’ personal interests for bursty topic detection from microblogs.",
"title": ""
},
{
"docid": "910a3be33d479be4ed6e7e44a56bb8fb",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
{
"docid": "95612aa090b77fc660279c5f2886738d",
"text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.",
"title": ""
},
{
"docid": "5519eea017d8f69804060f5e40748b1a",
"text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.",
"title": ""
},
{
"docid": "db02adcb4f8ace13ab1f6f4a79bf7232",
"text": "This paper presents a spectral and time-frequency analysis of EEG signals recorded on seven healthy subjects walking on a treadmill at three different speeds. An accelerometer was placed on the head of the subjects in order to record the shocks undergone by the EEG electrodes during walking. Our results indicate that up to 15 harmonics of the fundamental stepping frequency may pollute EEG signals, depending on the walking speed and also on the electrode location. This finding may call into question some conclusions drawn in previous EEG studies where low-delta band (especially around 1 Hz, the fundamental stepping frequency) had been announced as being the seat of angular and linear kinematics control of the lower limbs during walk. Additionally, our analysis reveals that EEG and accelerometer signals exhibit similar time-frequency properties, especially in frequency bands extending up to 150 Hz, suggesting that previous conclusions claiming the activation of high-gamma rhythms during walking may have been drawn on the basis of insufficiently cleaned EEG signals. Our results are put in perspective with recent EEG studies related to locomotion and extensively discussed in particular by focusing on the low-delta and high-gamma bands.",
"title": ""
},
{
"docid": "41c3505d1341247972d99319cba3e7ba",
"text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.",
"title": ""
},
{
"docid": "f231bff77a403fe18a445d894e9b93e5",
"text": "The geographical location of Internet IP addresses is important for academic research, commercial and homeland security applications. Thus, both commercial and academic databases and tools are available for mapping IP addresses to geographic locations. Evaluating the accuracy of these mapping services is complex since obtaining diverse large scale ground truth is very hard. In this work we evaluate mapping services using an algorithm that groups IP addresses to PoPs, based on structure and delay. This way we are able to group close to 100,000 IP addresses world wide into groups that are known to share a geo-location with high confidence. We provide insight into the strength and weaknesses of IP geolocation databases, and discuss their accuracy and encountered anomalies.",
"title": ""
},
{
"docid": "df0756ecff9f2ba84d6db342ee6574d3",
"text": "Security is becoming a critical part of organizational information systems. Intrusion detection system (IDS) is an important detection that is used as a countermeasure to preserve data integrity and system availability from attacks. Data mining is being used to clean, classify, and examine large amount of network data to correlate common infringement for intrusion detection. The main reason for using data mining techniques for intrusion detection systems is due to the enormous volume of existing and newly appearing network data that require processing. The amount of data accumulated each day by a network is huge. Several data mining techniques such as clustering, classification, and association rules are proving to be useful for gathering different knowledge for intrusion detection. This paper presents the idea of applying data mining techniques to intrusion detection systems to maximize the effectiveness in identifying attacks, thereby helping the users to construct more secure information systems.",
"title": ""
},
{
"docid": "89aa13fe76bf48c982e44b03acb0dd3d",
"text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.",
"title": ""
},
{
"docid": "fe1c70068301a379f6658775c6eb7ba0",
"text": "Fashion is a perpetual topic in human social life, and the mass has the penchant to emulate what large city residents and celebrities wear. Undeniably, New York City is such a bellwether large city with all kinds of fashion leadership. Consequently, to study what the fashion trends are during this year, it is very helpful to learn the fashion trends of New York City. Discovering fashion trends in New York City could boost many applications such as clothing recommendation and advertising. Does the fashion trend in the New York Fashion Show actually influence the clothing styles on the public? To answer this question, we design a novel system that consists of three major components: (1) constructing a large dataset from the New York Fashion Shows and New York street chic in order to understand the likely clothing fashion trends in New York, (2) utilizing a learning-based approach to discover fashion attributes as the representative characteristics of fashion trends, and (3) comparing the analysis results from the New York Fashion Shows and street-chic images to verify whether the fashion shows have actual influence on the people in New York City. Through the preliminary experiments over a large clothing dataset, we demonstrate the effectiveness of our proposed system, and obtain useful insights on fashion trends and fashion influence.",
"title": ""
},
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
{
"docid": "7c4c33097c12f55a08f8a7cc3634c5cb",
"text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.",
"title": ""
},
{
"docid": "e33e3e46a4bcaaae32a5743672476cd9",
"text": "This paper is based on the notion of data quality. It includes correctness, completeness and minimality for which a notational framework is shown. In long living databases the maintenance of data quality is a rst order issue. This paper shows that even well designed and implemented information systems cannot guarantee correct data in any circumstances. It is shown that in any such system data quality tends to decrease and therefore some data correction procedure should be applied from time to time. One aspect of increasing data quality is the correction of data values. Characteristics of a software tool which supports this data value correction process are presented and discussed.",
"title": ""
},
{
"docid": "8f621c393298a81ef46c104a92297231",
"text": "A new method of free-form deformation, t-FFD, is proposed. An original shape of large-scale polygonal mesh or point-cloud is deformed by using a control mesh, which is constituted of a set of triangles with arbitrary topology and geometry, including the cases of disconnection or self-intersection. For modeling purposes, a designer can handle the shape directly or indirectly, and also locally or globally. This method works on a simple mapping mechanism. First, each point of the original shape is parametrized by the local coordinate system on each triangle of the control mesh. After modifying the control mesh, the point is mapped according to each modified triangle. Finally, the mapped locations are blended as a new position of the original point, then a smoothly deformed shape is achieved. Details of the t-FFD are discussed and examples are shown.",
"title": ""
},
{
"docid": "3c8e85a977df74c2fd345db9934d4699",
"text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and righthand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.",
"title": ""
},
{
"docid": "9cbe52f8a135310d5da850c51b0a7d08",
"text": "Training robots for operation in the real world is a complex, time consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach – supplemental to fine tuning on the real robot – to further benefit from parallel access to a simulator during training and reduce sample requirements on the real robot. The developed approach harnesses auxiliary rewards to guide the exploration for the real world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can optimize its behaviour for states commonly visited by the real-world agent.",
"title": ""
}
] |
scidocsrr
|
6b0478b5f8cc9425bc7a57cd949e47c6
|
Survey on Automatic Number Plate Recognition (ANPR)
|
[
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
}
] |
[
{
"docid": "77564f157ea8ab43d6d9f95a212e7948",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "9c698f09275057887803010fb6dc789e",
"text": "Type 2 diabetes is now a pandemic and shows no signs of abatement. In this Seminar we review the pathophysiology of this disorder, with particular attention to epidemiology, genetics, epigenetics, and molecular cell biology. Evidence is emerging that a substantial part of diabetes susceptibility is acquired early in life, probably owing to fetal or neonatal programming via epigenetic phenomena. Maternal and early childhood health might, therefore, be crucial to the development of effective prevention strategies. Diabetes develops because of inadequate islet β-cell and adipose-tissue responses to chronic fuel excess, which results in so-called nutrient spillover, insulin resistance, and metabolic stress. The latter damages multiple organs. Insulin resistance, while forcing β cells to work harder, might also have an important defensive role against nutrient-related toxic effects in tissues such as the heart. Reversal of overnutrition, healing of the β cells, and lessening of adipose tissue defects should be treatment priorities.",
"title": ""
},
{
"docid": "6e8b6b3f0bb2496d11961715e28d8b48",
"text": "The purpose of this paper is to provide a broad overview of the WITAS Unmanned Aerial Vehicle Project. The WITAS UAV project is an ambitious, long-term basic research project with the goal of developing technologies and functionalities necessary for the successful deployment of a fully autonomous UAV operating over diverse geographical terrain containing road and traffic networks. The project is multi-disciplinary in nature, requiring many different research competences, and covering a broad spectrum of basic research issues, many of which relate to current topics in artificial intelligence. A number of topics considered are knowledge representation issues, active vision systems and their integration with deliberative/reactive architectures, helicopter modeling and control, ground operator dialogue systems, actual physical platforms, and a number of simulation techniques.",
"title": ""
},
{
"docid": "62a51c43d4972d41d3b6cdfa23f07bb9",
"text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.",
"title": ""
},
{
"docid": "a0c1f145f423052b6e8059c5849d3e34",
"text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.",
"title": ""
},
{
"docid": "8ce3fa727ff12f742727d5b80d8611b9",
"text": "Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding such a bias induced in learning through dropout, a popular technique to avoid overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to make the norm of incoming/outgoing weight vectors of all the hidden nodes equal. In addition, we provide a complete characterization of the optimization landscape induced by dropout.",
"title": ""
},
{
"docid": "04500f0dbf48d3c1d8eb02ed43d46e00",
"text": "The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are are rare in practice, so the results may not generalize to typical test suites. \n We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. \n We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.",
"title": ""
},
{
"docid": "a23aa9d2a0a100e805e3c25399f4f361",
"text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.",
"title": ""
},
{
"docid": "ffbcc6070b471bcf86dfb270d5fd2504",
"text": "This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.",
"title": ""
},
{
"docid": "23afac6bd3ed34fc0c040581f630c7bd",
"text": "Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common evaluation protocol and lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions to be held during the IEEE conference on Face and Gesture Recognition 2011, in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. It outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.",
"title": ""
},
{
"docid": "5c9ea5fcfef7bac1513a79fd918d3194",
"text": "Elderly suffers from injuries or disabilities through falls every year. With a high likelihood of falls causing serious injury or death, falling can be extremely dangerous, especially when the victim is home-alone and is unable to seek timely medical assistance. Our fall detection systems aims to solve this problem by automatically detecting falls and notify healthcare services or the victim’s caregivers so as to provide help. In this paper, development of a fall detection system based on Kinect sensor is introduced. Current fall detection algorithms were surveyed and we developed a novel posture recognition algorithm to improve the specificity of the system. Data obtained through trial testing with human subjects showed a 26.5% increase in fall detection compared to control algorithms. With our novel detection algorithm, the system conducted in a simulated ward scenario can achieve up to 90% fall detection rate.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
},
{
"docid": "2ad3d7f4f10b323b177247362b7a9f63",
"text": "Spotify is a peer-assisted music streaming service that has gained worldwide popularity in the past few years. Until now, little has been published about user behavior in such services. In this paper, we study the user behavior in Spotify by analyzing a massive dataset collected between 2010 and 2011. Firstly, we investigate the system dynamics including session arrival patterns, playback arrival patterns, and daily variation of session length. Secondly, we analyze individual user behavior on both multiple and single devices. Our analysis reveals the favorite times of day for Spotify users. We also show the correlations between both the length and the downtime of successive user sessions on single devices. In particular, we conduct the first analysis of the device-switching behavior of a massive user base.",
"title": ""
},
{
"docid": "01a649c8115810c8318e572742d9bd00",
"text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.",
"title": ""
},
{
"docid": "3d06052330110c1a401c327af6140d43",
"text": "Many online videogames make use of characters controlled by both humans (avatar) and computers (agent) to facilitate game play. However, the level of agency a teammate shows potentially produces differing levels of social presence during play, which in turn may impact on the player experience. To better understand these effects, two experimental studies were conducted utilising cooperative multiplayer games (Left 4 Dead 2 and Rocket League). In addition, the effect of familiarity between players was considered. The trend across the two studies show that playing with another human is more enjoyable, and facilitates greater connection, cooperation, presence and positive mood than play with a computer agent. The implications for multiplayer game design is discussed.",
"title": ""
},
{
"docid": "28352c478552728dddf09a2486f6c63c",
"text": "Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <[email protected]>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "9fa46e75dc28961fe3ce6fadd179cff7",
"text": "Task-oriented repetitive movements can improve motor recovery in patients with neurological or orthopaedic lesions. The application of robotics can serve to assist, enhance, evaluate, and document neurological and orthopaedic rehabilitation. ARMin II is the second prototype of a robot for arm therapy applicable to the training of activities of daily living. ARMin II has a semi-exoskeletal structure with seven active degrees of freedom (two of them coupled), five adjustable segments to fit in with different patient sizes, and is equipped with position and force sensors. The mechanical structure, the actuators and the sensors of the robot are optimized for patient-cooperative control strategies based on impedance and admittance architectures. This paper describes the mechanical structure and kinematics of ARMin II.",
"title": ""
},
{
"docid": "349f53ceb63e415d2fb3e97410c0ef88",
"text": "The current prominence and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano Things (IoNT) are extensively reviewed and a summary survey report is presented. The analysis clearly distinguishes between IoT and IoE which are wrongly considered to be the same by many people. Upon examining the current advancement in the fields of IoT, IoE and IoNT, the paper presents scenarios for the possible future expansion of their applications.",
"title": ""
}
] |
scidocsrr
|
4bd646da50658547d1ab74cfe5d08613
|
Metaphors We Think With: The Role of Metaphor in Reasoning
|
[
{
"docid": "45082917d218ec53559c328dcc7c02db",
"text": "How are people able to think about things they have never seen or touched? We demonstrate that abstract knowledge can be built analogically from more experience-based knowledge. People's understanding of the abstract domain of time, for example, is so intimately dependent on the more experience-based domain of space that when people make an air journey or wait in a lunch line, they also unwittingly (and dramatically) change their thinking about time. Further, our results suggest that it is not sensorimotor spatial experience per se that influences people's thinking about time, but rather people's representations of and thinking about their spatial experience.",
"title": ""
},
{
"docid": "5ebd92444b69b2dd8e728de2381f3663",
"text": "A mind is a computer.",
"title": ""
},
{
"docid": "e39cafd4de135ccb17f7cf74cbd38a97",
"text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.",
"title": ""
},
{
"docid": "c0fc94aca86a6aded8bc14160398ddea",
"text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.",
"title": ""
}
] |
[
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "9adaeac8cedd4f6394bc380cb0abba6e",
"text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".",
"title": ""
},
{
"docid": "f14daee1ddf6bbf4f3d41fe6ef5fcdb6",
"text": "A characteristic that will distinguish successful manufacturing enterprises of the next millennium is agility: the ability to respond quickly, proactively, and aggressively to unpredictable change. The use of extended virtual enterprise Supply Chains (SC) to achieve agility is becoming increasingly prevalent. A key problem in constructing effective SCs is the lack of methods and tools to support the integration of processes and systems into shared SC processes and systems. This paper describes the architecture and concept of operation of the Supply Chain Process Design Toolkit (SCPDT), an integrated software system that addresses the challenge of seamless and efficient integration. The SCPDT enables the analysis and design of Supply Chain (SC) processes. SCPDT facilitates key SC process engineering tasks including 1) AS-IS process base-lining and assessment, 2) collaborative TO-BE process requirements definition, 3) SC process integration and harmonization, 4) TO-BE process design trade-off analysis, and 5) TO-BE process planning and implementation.",
"title": ""
},
{
"docid": "3874d10936841f59647d73f750537d96",
"text": "The number of studies comparing nutritional quality of restrictive diets is limited. Data on vegan subjects are especially lacking. It was the aim of the present study to compare the quality and the contributing components of vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diets. Dietary intake was estimated using a cross-sectional online survey with a 52-items food frequency questionnaire (FFQ). Healthy Eating Index 2010 (HEI-2010) and the Mediterranean Diet Score (MDS) were calculated as indicators for diet quality. After analysis of the diet questionnaire and the FFQ, 1475 participants were classified as vegans (n = 104), vegetarians (n = 573), semi-vegetarians (n = 498), pesco-vegetarians (n = 145), and omnivores (n = 155). The most restricted diet, i.e., the vegan diet, had the lowest total energy intake, better fat intake profile, lowest protein and highest dietary fiber intake in contrast to the omnivorous diet. Calcium intake was lowest for the vegans and below national dietary recommendations. The vegan diet received the highest index values and the omnivorous the lowest for HEI-2010 and MDS. Typical aspects of a vegan diet (high fruit and vegetable intake, low sodium intake, and low intake of saturated fat) contributed substantially to the total score, independent of the indexing system used. The score for the more prudent diets (vegetarians, semi-vegetarians and pesco-vegetarians) differed as a function of the used indexing system but they were mostly better in terms of nutrient quality than the omnivores.",
"title": ""
},
{
"docid": "03a39c98401fc22f1a376b9df66988dc",
"text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.",
"title": ""
},
{
"docid": "18136fba311484e901282c31c9d206fd",
"text": "New demands, coming from the industry 4.0 concept of the near future production systems have to be fulfilled in the coming years. Seamless integration of current technologies with new ones is mandatory. The concept of Cyber-Physical Production Systems (CPPS) is the core of the new control and automation distributed systems. However, it is necessary to provide the global production system with integrated architectures that make it possible. This work analyses the requirements and proposes a model-based architecture and technologies to make the concept a reality.",
"title": ""
},
{
"docid": "7ebaee3df1c8ee4bf1c82102db70f295",
"text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "221c59b8ea0460dac3128e81eebd6aca",
"text": "STUDY DESIGN\nA prospective self-assessment analysis and evaluation of nutritional and radiographic parameters in a consecutive series of healthy adult volunteers older than 60 years.\n\n\nOBJECTIVES\nTo ascertain the prevalence of adult scoliosis, assess radiographic parameters, and determine if there is a correlation with functional self-assessment in an aged volunteer population.\n\n\nSUMMARY OF BACKGROUND DATA\nThere exists little data studying the prevalence of scoliosis in a volunteer aged population, and correlation between deformity and self-assessment parameters.\n\n\nMETHODS\nThere were 75 subjects in the study. Inclusion criteria were: age > or =60 years, no known history of scoliosis, and no prior spine surgery. Each subject answered a RAND 36-Item Health Survey questionnaire, a full-length anteroposterior standing radiographic assessment of the spine was obtained, and nutritional parameters were analyzed from blood samples. For each subject, radiographic, laboratory, and clinical data were evaluated. The study population was divided into 3 groups based on frontal plane Cobb angulation of the spine. Comparison of the RAND 36-Item Health Surveys data among groups of the volunteer population and with United States population benchmark data (age 65-74 years) was undertaken using an unpaired t test. Any correlation between radiographic, laboratory, and self-assessment data were also investigated.\n\n\nRESULTS\nThe mean age of the patients in this study was 70.5 years (range 60-90). Mean Cobb angle was 17 degrees in the frontal plane. In the study group, 68% of subjects met the definition of scoliosis (Cobb angle >10 degrees). No significant correlation was noted among radiographic parameters and visual analog scale scores, albumin, lymphocytes, or transferrin levels in the study group as a whole. Prevalence of scoliosis was not significantly different between males and females (P > 0.03). The scoliosis prevalence rate of 68% found in this study reveals a rate significantly higher than reported in other studies. These findings most likely reflect the targeted selection of an elderly group. Although many patients with adult scoliosis have pain and dysfunction, there appears to be a large group (such as the volunteers in this study) that has no marked physical or social impairment.\n\n\nCONCLUSIONS\nPrevious reports note a prevalence of adult scoliosis up to 32%. In this study, results indicate a scoliosis rate of 68% in a healthy adult population, with an average age of 70.5 years. This study found no significant correlations between adult scoliosis and visual analog scale scores or nutritional status in healthy, elderly volunteers.",
"title": ""
},
{
"docid": "9d2a73c8eac64ed2e1af58a5883229c3",
"text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.",
"title": ""
},
{
"docid": "428ecd77262fc57c5d0d19924a10f02a",
"text": "In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "59f022a6e943f46e7b87213f651065d8",
"text": "This paper presents a procedure to design a robust switching strategy for the basic Buck-Boost DC-DC converter utilizing switched systems' theory. The converter dynamic is described in the framework of linear switched systems and then sliding-mode controller is developed to ensure the asymptotic stability of the desired equilibrium point for the switched system with constant external input. The inherent robustness of the sliding-mode switching rule leads to efficient regulation of the output voltage under load variations. Simulation results are presented to demonstrate the outperformance of the proposed method compared to a rival scheme in the literature.",
"title": ""
},
{
"docid": "d49fc093d43fa3cdf40ecfa3f670e165",
"text": "As a result of the increase in robots in various fields, the mechanical stability of specific robots has become an important subject of research. This study is concerned with the development of a two-wheeled inverted pendulum robot that can be applied to an intelligent, mobile home robot. This kind of robotic mechanism has an innately clumsy motion for stabilizing the robot’s body posture. To analyze and execute this robotic mechanism, we investigated the exact dynamics of the mechanism with the aid of 3-DOF modeling. By using the governing equations of motion, we analyzed important issues in the dynamics of a situation with an inclined surface and also the effect of the turning motion on the stability of the robot. For the experiments, the mechanical robot was constructed with various sensors. Its application to a two-dimensional floor environment was confirmed by experiments on factors such as balancing, rectilinear motion, and spinning motion.",
"title": ""
},
{
"docid": "a9fc5418c0b5789b02dd6638a1b61b5d",
"text": "As the homeostatis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "497e2ed6d39ad6c09210b17ce137c45a",
"text": "PURPOSE\nThe purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance.\n\n\nMETHOD\nThis study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0.\n\n\nRESULTS\nThe study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy), and organizational characteristics (i.e. management support, facilitating conditions, and user involvement) which have level of significance of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals.\n\n\nCONCLUSIONS\nBased on the results of this study, hospital management and IT developers should have more understanding on the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS.",
"title": ""
},
{
"docid": "2923e6f0760006b6a049a5afa297ca56",
"text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.",
"title": ""
},
{
"docid": "369ed2ef018f9b6a031b58618f262dce",
"text": "Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the userand county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.",
"title": ""
}
] |
scidocsrr
|
140a5c004763a4a7ec90331f9a069f2c
|
LBP-based degraded document image binarization
|
[
{
"docid": "6779d20fd95ff4525404bdd4d3c7df4b",
"text": "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "ab10b0dd6a9161222a5db4275a698bc5",
"text": "BACKGROUND\nPseudomonas aeruginosa is a common cause of community-acquired and nosocomial-acquired pneumonia. The development of resistance of P. aeruginosa to antibiotics is increasing globally due to the overuse of antibiotics. This article examines, retrospectively, the antibiotic resistance in patients with community-acquired versus nosocomial-acquired pneumonia caused by P. aeruginosa or multidrug-resistant (MDR) P. aeruginosa.\n\n\nMETHODS\nData from patients with community-acquired and nosocomial-acquired pneumonia caused by P. aeruginosa and MDR P. aeruginosa were collected from the hospital charts at the HELIOS Clinic, Witten/Herdecke University, Wuppertal, Germany, between January 2004 and August 2014. An antibiogram was created from all study patients with community-acquired and nosocomial-acquired pneumonia caused by P. aeruginosa or MDR P. aeruginosa.\n\n\nRESULTS\nA total of 168 patients with mean age 68.1 ± 12.8 (113 [67.3% males and 55 [32.7%] females) were identified; 91 (54.2%) had community-acquired and 77 (45.8%) had nosocomial-acquired pneumonia caused by P. aeruginosa. Patients with community-acquired versus nosocomial-acquired pneumonia had a mean age of 66.4 ± 13.8 vs. 70.1 ± 11.4 years [59 vs. 54 (64.8% vs. 70.1%) males and 32 vs. 23 (35.2% vs. 29.9%) females]. They included 41 (24.4%) patients with pneumonia due to MDR P. aeruginosa: 27 (65.9%) community-acquired and 14 (34.1%) nosocomial-acquired cases. P. aeruginosa and MDR P. aeruginosa showed a very high resistance to fosfomycin (community-acquired vs. nosocomial-acquired) (81.0% vs. 84.2%; 0 vs. 85.7%). A similar resistance pattern was seen with ciprofloxacin (35.2% vs. 24.0%; 70.4% vs. 61.5%), levofloxacin (34.6% vs. 24.5%; 66.7% vs. 64.3%), ceftazidime (15.9% vs. 30.9; 33.3% vs. 61.5%), piperacillin (24.2% vs. 29.9%; 44.4% vs. 57.1%), imipenem (28.6% vs. 27.3%; 55.6% vs. 50.0%), piperacillin and tazobactam (23.1% vs. 28.6%; 44.4% vs. 50.0%), tobramycin (28.0% vs. 17.2%; 52.0% vs. 27.3%), gentamicin (26.4% vs. 18.2%; 44.4% vs. 21.4%), and meropenem (20.2% vs. 20.3%; 42.3% vs. 50.0%). An elevated resistance of P. aeruginosa and MDR P. aeruginosa was found for cefepime (11.1% vs. 23.3%; 25.9% vs. 50.0%), and amikacin (10.2% vs. 9.1%; 27.3% vs. 9.1%). Neither pathogen was resistant to colistin (P = 0.574).\n\n\nCONCLUSION\nWhile P. aeruginosa and MDR P. aeruginosa were resistant to a variety of commonly used antibiotics, they were not resistant to colistin in the few isolates recovered from patients with pneumonia.",
"title": ""
},
{
"docid": "1ca6e4d73ec39aebd7cafed73a322b77",
"text": "Manufacturing of powder-based products is a focus of increasing research in the recent years. The main reason is the lack of predictive process models connecting process parameters and material properties to product quality attributes. Moreover, the trend towards continuous manufacturing for the production of multiple pharmaceutical products increases the need for model-based process and product design. This work aims to identify the challenges in flowsheet model development and simulation for solid-based pharmaceutical processes and show its application and advantages for the integrated simulation and sensitivity analysis of two tablet manufacturing case studies: direct compaction and dry granulation. harmaceutical manufacturing",
"title": ""
},
{
"docid": "9eedeec21ab380c0466ed7edfe7c745d",
"text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.",
"title": ""
},
{
"docid": "972abdbc8667c24ae080eb2ffb7835e9",
"text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.",
"title": ""
},
{
"docid": "f5eb1355dd1511bd647ec317d0336cd7",
"text": "Cloud Computing holds the potential to eliminate the requirements for setting up of highcost computing infrastructure for the IT-based solutions and services that the industry uses. It promises to provide a flexible IT architecture, accessible through internet for lightweight portable devices. This would allow many-fold increase in the capacity or capabilities of the existing and new software. In a cloud computing environment, the entire data reside over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centres may lie in any corner of the world beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and taken care of. Also, one can never deny the possibility of a server breakdown that has been witnessed, rather quite often in the recent times. There are various issues that need to be dealt with respect to security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening the Cloud computing adoption and diffusion affecting the various stake-holders linked to it.",
"title": ""
},
{
"docid": "398c506a2fbc1738d3d70f0507334096",
"text": "Preprocessing To prepare the images for the network, each of the training images was resized to 192 pixels by 192 pixels. To create additional training images, each of the training images was elastically distorted. For each of the original training images, four randomly generated elastic distorted images were generated and then resized down to 192 by 192 pixels. In addition, each training image was also rotated 90 degrees and additional elastic distortions were applied to the rotated images.",
"title": ""
},
{
"docid": "435307df5495b497ff9065e9d98af044",
"text": "Recent breakthroughs in word representation methods have generated a new spark of enthusiasm amidst the computational linguistic community, with methods such as Word2Vec have indeed shown huge potential to compress insightful information on words’ contextual meaning in lowdimensional vectors. While the success of these representations has mainly been harvested for traditional NLP tasks such as word prediction or sentiment analysis, recent studies have begun using these representations to track the dynamics of language and meaning over time. However, recent works have also shown these embeddings to be extremely noisy and training-set dependent, thus considerably restricting the scope and significance of this potential application. In this project, building upon the work presented by [1] in 2015, we thus propose to investigate ways of defining interpretable embeddings, and as well as alternative ways of assessing the dynamics of semantic changes so as to endow more statistical power to the analysis. 1 Problem Statement, Motivation and Prior Work The recent success of Neural-Network-generated word embeddings (word2vec, Glove, etc.) for traditional NLP tasks such as word prediction or text sentiment analysis has motivated the scientific community to use these representations as a way to analyze language itself. Indeed, if these low-dimensional word representations have proven to successfully carry both semantic and syntactic information, such a successful information compression could thus potentially be harvested to tackle more complex linguistic problems, such as monitoring language dynamics over time or space. In particular, in [1], [5], and [7], word embeddings are used to capture drifts of word meanings over time through the analysis of the temporal evolution of any given word’ closest neighbors. Other studies [6] use them to relate semantic shifts to geographical considerations. However, as highlighted by Hahn and Hellrich in [3], the inherent randomness of the methods used to encode these representations results in the high variability of any given word’s closest neighbors, thus considerably narrowing the statistical power of the study: how can we detect real semantic changes from the ambient jittering inherent to the embeddings’ representations? Can we try to provide a perhaps more interpretable and sounder basis of comparison than the neighborhoods to detect these changes? Building upon the methodology developed by Hamilton and al [1] to study language dynamics and the observations made by Hahn and Hellrich [3], we propose to tackle this problem from a mesoscopic scale: the intuition would be that if local neighborhoods are too unstable, we should thus look at information contained in the overall embedding matrix to build our statistical framework. In particular, a first idea is that we should try to evaluate the existence of a potentially ”backbone” structure of the embeddings. Indeed, it would seem intuitive that if certain words –such as “gay” or “asylum” (as observed by Hamilton et al) have exhibited important drifts in meaning throughout the 20th century, another large set of words – such as “food”,“house” or “people” – have undergone very little semantic change over time. As such, we should expect the relative distance between atoms in this latter set (as defined by the acute angle between their respective embeddings) to remain relatively constant from decade to decade. 
Hence, one could try to use this stable backbone graph as a way to triangulate the movement of the other word vectors over time, thus hopefully inducing more interpretable changes over time. Such an approach could also be used to answer the question of assessing the validity of our embeddings for linguistic purposes: how well do these embeddings capture similarity and nuances between words? A generally",
"title": ""
},
{
"docid": "d931f6f9960e8688c2339a27148efe74",
"text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI based semantic web vocabulary andsemantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysiscomponent uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The s rvicecomponent sup-",
"title": ""
},
{
"docid": "ba39f3a2b5ed9af6cdf4530176039e05",
"text": "Survival analysis can be applied to build models fo r time to default on debt. In this paper we report an application of survival analysis to model default o n a large data set of credit card accounts. We exp lore the hypothesis that probability of default is affec ted by general conditions in the economy over time. These macroeconomic variables cannot readily be inc luded in logistic regression models. However, survival analysis provides a framework for their in clusion as time-varying covariates. Various macroeconomic variables, such as interest rate and unemployment rate, are included in the analysis. We show that inclusion of these indicators improves model fit and affects probability of default yielding a modest improvement in predictions of def ault on an independent test set.",
"title": ""
},
{
"docid": "de6e139d0b5dc295769b5ddb9abcc4c6",
"text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.",
"title": ""
},
{
"docid": "149ffd270f39a330f4896c7d3aa290be",
"text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.",
"title": ""
},
{
"docid": "683e496bd08fe3a55c63ba8788481184",
"text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.",
"title": ""
},
{
"docid": "0db3dab283d054f806780b2251e50c60",
"text": "This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes. Modeling and recognition results are presented.",
"title": ""
},
{
"docid": "9233195d4f25e21a4de1a849d8f47932",
"text": "For the first time, the DRAM device composed of 6F/sup 2/ open-bit-line memory cell with 80nm feature size is developed. Adopting 6F/sup 2/ scheme instead of customary 8F/sup 2/ scheme made it possible to reduce chip size by up to nearly 20%. However, converting the cell scheme to 6F/sup 2/ accompanies some difficulties such as decrease of the cell capacitance, and more compact core layout. To overcome this strict obstacles which are originally stemming from the conversion of cell scheme to 6F/sup 2/, TIT structure with AHO (AfO/AlO/AfO) is adopted for higher cell capacitance, and bar-type contact is adopted for adjusting to compact core layout. Moreover, to lower cell V/sub th/ so far as suitable for characteristic of low power operation, the novel concept, S-RCAT (sphere-shaped-recess-channel-array transistor) is introduced. It is the improved scheme of RCAT used in 8F/sup 2/ scheme. By adopting S-RCAT, V/sub th/ can be lowered, SW, DIBL are improved. Additionally, data retention time characteristic can be improved.",
"title": ""
},
{
"docid": "90ca045940f1bc9517c64bd93fd33d37",
"text": "We present a new algorithm for encoding low dynamic range images into fixed-rate texture compression formats. Our approach provides orders of magnitude improvements in speed over existing publicly-available compressors, while generating high quality results. The algorithm is applicable to any fixed-rate texture encoding scheme based on Block Truncation Coding and we use it to compress images into the OpenGL BPTC format. The underlying technique uses an axis-aligned bounding box to estimate the proper partitioning of a texel block and performs a generalized cluster fit to compute the endpoint approximation. This approximation can be further refined using simulated annealing. The algorithm is inherently parallel and scales with the number of processor cores. We highlight its performance on low-frequency game textures and the high frequency Kodak Test Image Suite.",
"title": ""
},
{
"docid": "87eafc3005bc936c0d6765285295f37e",
"text": "A microbial fuel cell (MFC) is a bioreactor that converts chemical energy in the chemical bonds in organic compounds to electrical energy through catalytic reactions of microorganisms under anaerobic conditions. It has been known for many years that it is possible to generate electricity directly by using bacteria to break down organic substrates. The recent energy crisis has reinvigorated interests in MFCs among academic researchers as a way to generate electric power or hydrogen from biomass without a net carbon emission into the ecosystem. MFCs can also be used in wastewater treatment facilities to break down organic matters. They have also been studied for applications as biosensors such as sensors for biological oxygen demand monitoring. Power output and Coulombic efficiency are significantly affected by the types of microbe in the anodic chamber of an MFC, configuration of the MFC and operating conditions. Currently, real-world applications of MFCs are limited because of their low power density level of several thousand mW/m2. Efforts are being made to improve the performance and reduce the construction and operating costs of MFCs. This article presents a critical review on the recent advances in MFC research with emphases on MFC configurations and performances.",
"title": ""
},
{
"docid": "021243b584395d190e191e0713fe4a5c",
"text": "Convolutional neural networks (CNNs) have achieved remarkable performance in a wide range of computer vision tasks, typically at the cost of massive computational complexity. The low speed of these networks may hinder real-time applications especially when computational resources are limited. In this paper, an efficient and effective approach is proposed to accelerate the test-phase computation of CNNs based on low-rank and group sparse tensor decomposition. Specifically, for each convolutional layer, the kernel tensor is decomposed into the sum of a small number of low multilinear rank tensors. Then we replace the original kernel tensors in all layers with the approximate tensors and fine-tune the whole net with respect to the final classification task using standard backpropagation. \\\\ Comprehensive experiments on ILSVRC-12 demonstrate significant reduction in computational complexity, at the cost of negligible loss in accuracy. For the widely used VGG-16 model, our approach obtains a 6.6$\\times$ speed-up on PC and 5.91$\\times$ speed-up on mobile device of the whole network with less than 1\\% increase on top-5 error.",
"title": ""
},
{
"docid": "7c16d1675e6a041117ffaa5a7a29fe40",
"text": "Pakistan hosts a competitive and fluid telecommunication market and for a company to sustain, create customer value and increase economic efficiency, it needs to better understand its customers. The purpose of clustering or customer segmentation is to deliver actionable results for marketing, product development and business planning. In this paper, we focus on customer segmentation using clustering algorithms on real data of a telecommunication company in Pakistan. After choosing appropriate attributes for clustering, we used the two-step clustering algorithm in order to create different customer segments. Moreover, the insights obtained from each segment were analyzed before suggesting marketing strategies for up-selling and better targeted campaigns.",
"title": ""
},
{
"docid": "0a7601bd874d898386a9ecf23634fc75",
"text": "The combined effort of the Universiti Kebangsaan Malaysia and the Kyushu Institute of Technology (KYUTECH) to develop a small-satellite antenna is presented in this paper. Microstrip antennas offer an ideal solution to satellite communication requirements due to their light weight and low profile. In this paper, a compact single-layer coaxial-probe-fed circularly polarized high-gain patch antenna designed for HORYU-IV nanosatellite S-band communication is presented. HORYU-IV aims to acquire data about high-voltage discharge phenomena in low Earth orbit (LEO). This will enhance the understanding of satellite charging, and overall satellite reliability can be improved for future high-power space programs. The proposed antenna consists of four asymmetric V-shaped slits, i.e., one at each corner of a rectangular patch, and a parasitic rectangular strip. The proposed antenna achieves a sufficient beamwidth for LEO satellite application and less than 3-dB axial ratio for the entire field of view. A prototype of the antenna was developed with a 1.57-mm-thick singlelayer Rogers substrate with a relative permittivity of 2.2, and the measured results are consistent with the simulation.",
"title": ""
},
{
"docid": "7db1b370d0e14e80343cbc7718bbb6c9",
"text": "T free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.",
"title": ""
}
] |
scidocsrr
|
b1e86768a0747ec62399398033faf938
|
Autonomous vehicle navigation using evolutionary reinforcement learning
|
[
{
"docid": "9eba7766cfd92de0593937defda6ce64",
"text": "A basic classifier system, ZCS, is presented that keeps much of Holland's original framework but simplifies it to increase understandability and performance. ZCS's relation to Q-learning is brought out, and their performances compared in environments of two difficulty levels. Extensions to ZCS are proposed for temporary memory, better action selection, more efficient use of the genetic algorithm, and more general classifier representation.",
"title": ""
}
] |
[
{
"docid": "8dbddd1ebb995ec4b2cc5ad627e91f61",
"text": "Pac-Man (and variant) computer games have received some recent attention in artificial intelligence research. One reason is that the game provides a platform that is both simple enough to conduct experimental research and complex enough to require non-trivial strategies for successful game-play. This paper describes an approach to developing Pac-Man playing agents that learn game-play based on minimal onscreen information. The agents are based on evolving neural network controllers using a simple evolutionary algorithm. The results show that neuroevolution is able to produce agents that display novice playing ability, with a minimal amount of onscreen information, no knowledge of the rules of the game and a minimally informative fitness function. The limitations of the approach are also discussed, together with possible directions for extending the work towards producing better Pac-Man playing agents",
"title": ""
},
{
"docid": "adc310c02471d8be579b3bfd32c33225",
"text": "In this work, we put forward the notion of Worry-Free Encryption. This allows Alice to encrypt confidential information under Bob's public key and send it to him, without having to worry about whether Bob has the authority to actually access this information. This is done by encrypting the message under a hidden access policy that only allows Bob to decrypt if his credentials satisfy the policy. Our notion can be seen as a functional encryption scheme but in a public-key setting. As such, we are able to insist that even if the credential authority is corrupted, it should not be able to compromise the security of any honest user.\n We put forward the notion of Worry-Free Encryption and show how to achieve it for any polynomial-time computable policy, under only the assumption that IND-CPA public-key encryption schemes exist. Furthermore, we construct CCA-secure Worry-Free Encryption, efficiently in the random oracle model, and generally (but inefficiently) using simulation-sound non-interactive zero-knowledge proofs.",
"title": ""
},
{
"docid": "fec345f9a3b2b31bd76507607dd713d4",
"text": "E-government is a relatively new branch of study within the Information Systems (IS) field. This paper examines the factors influencing adoption of e-government services by citizens. Factors that have been explored in the extant literature present inadequate understanding of the relationship that exists between ‘adopter characteristics’ and ‘behavioral intention’ to use e-government services. These inadequacies have been identified through a systematic and thorough review of empirical studies that have considered adoption of government to citizen (G2C) electronic services by citizens. This paper critically assesses key factors that influence e-government service adoption; reviews limitations of the research methodologies; discusses the importance of 'citizen characteristics' and 'organizational factors' in adoption of e-government services; and argues for the need to examine e-government service adoption in the developing world.",
"title": ""
},
{
"docid": "0e4cd983047da489ee3b28511aea573a",
"text": "While bottom-up and top-down processes have shown effectiveness during predicting attention and eye fixation maps on images, in this paper, inspired by the perceptual organization mechanism before attention selection, we propose to utilize figure-ground maps for the purpose. So as to take both pixel-wise and region-wise interactions into consideration when predicting label probabilities for each pixel, we develop a context-aware model based on multiple segmentation to obtain final results. The MIT attention dataset [14] is applied finally to evaluate both new features and model. Quantitative experiments demonstrate that figure-ground cues are valid in predicting attention selection, and our proposed model produces improvements over baseline method.",
"title": ""
},
{
"docid": "72782fdcc61d1059bce95fe4e7872f5b",
"text": "ÐIn object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index TermsÐMedian graph, graph distance, graph matching, genetic algorithm,",
"title": ""
},
{
"docid": "42366db7e9c27dd30b64557e2c413bec",
"text": "This paper discusses plasma-assisted conversion of pyrolysis gas (pyrogas) fuel to synthesis gas (syngas, combination of hydrogen and carbon monoxide). Pyrogas is a product of biomass, municipal wastes, or coal-gasification process that usually contains hydrogen, carbon monoxide, carbon dioxide, water, unreacted light and heavy hydrocarbons, and tar. These hydrocarbons diminish the fuel value of pyrogas, thereby necessitating the need for the conversion of the hydrocarbons. Various conditions and reforming reactions were considered for the conversion of pyrogas into syngas. Nonequilibrium plasma reforming is an effective homogenous process which makes use of catalysts unnecessary for fuel reforming. The effectiveness of gliding arc plasma as a nonequilibrium plasma discharge is demonstrated in the fuel reforming reaction processes with the aid of a specially designed low current device also known as gliding arc plasma reformer. Experimental results obtained focus on yield, molar concentration, carbon balance, and enthalpy at different conditions.",
"title": ""
},
{
"docid": "a5cc8b6df2dec42d730a0c0ec45d64bb",
"text": "The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular neuropsychological screening tool for cognitive conditions. The Digital Clock Drawing Test (dCDT) uses novel software to analyze data from a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making possible the analysis of both the drawing process and final product. We developed methodology to analyze pen stroke data from these drawings, and computed a large collection of features which were then analyzed with a variety of machine learning techniques. The resulting scoring systems were designed to be more accurate than the systems currently used by clinicians, but just as interpretable and easy to use. The systems also allow us to quantify the tradeoff between accuracy and interpretability. We created automated versions of the CDT scoring systems currently used by clinicians, allowing us to benchmark our models, which indicated that our machine learning models substantially outperformed the existing scoring systems. 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY, USA. Copyright by the author(s). 1. Background The Clock Drawing Test (CDT) a simple pencil and paper test has been used as a screening tool to differentiate normal individuals from those with cognitive impairment. The test takes less than two minutes, is easily administered and inexpensive, and is deceptively simple: it asks subjects first to draw an analog clock-face showing 10 minutes after 11 (the command clock), then to copy a pre-drawn clock showing the same time (the copy clock). It has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and other dementias and conditions. (Freedman et al., 1994; Grande et al., 2013). The CDT is often used by neuropsychologists, neurologists and primary care physicians as part of a general screening for cognitive change (Strub et al., 1985). For the past decade, neuropsychologists in our group have been administering the CDT using a commercially available digitizing ballpoint pen (the DP-201 from Anoto, Inc.) that records its position on the page with considerable spatial (±0.005 cm) and temporal (13ms) accuracy, enabling the analysis of not only the end product – the drawing – but also the process that produced it, including all of the subject’s movements and hesitations. The resulting test is called the digital Clock Drawing Test (dCDT). Figure 1 and Figure 2 illustrate clock drawings from a subject in the memory impairment group, and a subject diagnosed with Parkinson’s disease, respectively. 61 ar X iv :1 60 6. 07 16 3v 1 [ st at .M L ] 2 3 Ju n 20 16 Interpretable Machine Learning Models for the Digital Clock Drawing Test Figure 1. Example Alzheimer’s Disease clock from our dataset. Figure 2. Example Parkinson’s Disease clock from our dataset. 2. Existing Scoring Systems There are a variety of methods for scoring the CDT, varying in complexity and the types of features they use. They often take the form of systems that add or subtract points based on features of the clock, and often have the additional constraint that the (n + 1) feature matters only if the previous n features have been satisfied, adding a higher level of complexity in understanding the resulting score. A threshold is then used to decide whether the test gives evidence of impairment. 
While the scoring system are typically short and understandable by a human, the features they attend to are often expressed in relatively vague terms, leading to potentially lower inter-rater reliability. For example, the Rouleau (Rouleau et al., 1992) scoring system, shown in Table 1, asks whether there are “slight errors in the placement of the hands” and whether “the clockface is present without gross distortion”. In order to benchmark our models for the dCDT against existing scoring systems, we needed to create automated versions of them so that we could apply them to our set of clocks. We did this for seven of the most widely used existing scoring systems (Souillard-Mandar et al., 2015) by specifying the computations to be done in enough detail that they could be expressed unambiguously in code. As maximum: 10 points 1. Integrity of the clockface (maximum: 2 points) 2: Present without gross distortion 1: Incomplete or some distortion 0: Absent or totally inappropriate 2. Presence and sequencing of the numbers (maximum: 4 points) 4: All present in the right order and at most minimal error in the spatial arrangement 3: All present but errors in spatial arrangement 2: Numbers missing or added but no gross distortions of the remaining numbers Numbers placed in counterclockwise direction Numbers all present but gross distortion in spatial layout 1: Missing or added numbers and gross spatial distortions 0: Absence or poor representation of numbers 3. Presence and placement of the hands (maximum: 4 points) 4: Hands are in correct position and the size difference is respected 3: Sight errors in the placement of the hands or no representation of size difference between the hands 2: Major errors in the placement of the hands (significantly out of course including 10 to 11) 1: Only one hand or poor representation of two hands 0: No hands or perseveration on hands Table 1. Original Rouleau scoring system (Rouleau et al., 1992) one example, we translated “slight errors in the placement of the hands” to “exactly two hands present AND at most one hand with a pointing error of between 1 and 2 degrees”, where the i are thresholds to be optimized. We refer to these new models as operationalized scoring systems. 3. An Interpretable Machine Learning Approach 3.1. Stroke-Classification and Feature Computation The raw data from the pen is analyzed using novel software developed for this task (Davis et al., 2014; Davis & Penney, 2014; Cohen et al., 2014). An algorithm classifies the pen strokes as one or another of the clock drawing symbols (i.e. clockface, hands, digits, noise); stroke classification errors are easily corrected by human scorer using a simple drag-and-drop interface. Figure 3 shows a screenshot of the system after the strokes in the command clock from Figure 1 have been classified. Using these symbol-classified strokes, we compute a large collection of features from the test, measuring geometric and temporal properties in a single clock, both clocks, and 62 Interpretable Machine Learning Models for the Digital Clock Drawing Test Figure 3. Classified command clock from Figure 1 differences between them. Example features include: • The number of strokes; the total ink length; the time it took to draw; and the pen speed for various clock components; timing information is used to measure how quickly different parts of the clock were drawn; latencies between components. 
• The length of the major and minor axis and eccentricity of the fitted ellipse; largest angular gaps in the clockface; distance and angular difference between starting and ending points of the clock face. • Digits that are missing or repeated; the height and width of digit bounding boxes. • Omissions or repetitions of hands; angular error from their correct angle; the hour hand to minute hand size ratio; the presence and direction of arrowheads. We also selected a subset of our features that we believe are both particularly understandable and that have values easily verifiable by clinicians. We expect, for example, that there would be wide agreement on whether a number is present, whether hands have arrowheads on them, whether there are easily noticeable noise strokes, or if the total drawing time particularly high or low. We call this subset the Simplest Features. 3.2. Traditional Machine Learning We focused on three categories of cognitive impairment, for which we had a total of 453 tests: memory impairment disorders (MID) consisting of Alzheimer’s disease and amnestic mild cognitive impairment (aMCI); vascular cognitive disorders (VCD) consisting of vascular dementia, mixed MCI and vascular cognitive impairment; and Parkinson’s disease (PD). Our set of 406 healthy controls (HC) comes from people who have been longitudinally studied as participants in the Framingham Heart Study. Our task is screening: we want to distinguish between healthy and one of the three categories of cognitive impairment, as well as a group screening, distinguish between healthy and all three conditions together. We started our machine learning work by applying state-ofthe-art machine learning methods to the set of all features. We generated classifiers using multiple machine learning methods, including CART (Breiman et al., 1984), C4.5 (Quinlan, 1993), SVM with gaussian kernels (Joachims, 1998), random forests (Breiman, 2001), boosted decision trees (Friedman, 2001), and regularized logistic regression (Fan et al., 2008). We used stratified cross-validation to divide the data into 5 folds to obtain training and testing sets. We further cross-validated each training set into 5 folds to optimize the parameters of the algorithm using grid search over a set of ranges. We chose to measure quality using area under the receiver operator characteristic curve (AUC) as a single, concise statistic. We found that the AUC for best classifiers ranged from 0.88 to 0.93. We also ran our experiment on the subset of Simplest Features, and found that the AUC ranged from 0.82 to 0.83. Finally, we measured the performance of the operationalized scoring systems; the best ones ranged from 0.70 to 0.73. Complete results can be found in Table 2. 3.3. Human Interpretable Machine Learning 3.3.1. DEFINITION OF INTERPRETABILITY To ensure that we produced models that can be used and accepted in a clinical context, we obtained guidelines from clinicians. This led us to focus on three components in defining complexity: Computational complexity: the models should be relatively easy to compute, requiring",
"title": ""
},
{
"docid": "fba5b69c3b0afe9f39422db8c18dba06",
"text": "It is well known that stressful experiences may affect learning and memory processes. Less clear is the exact nature of these stress effects on memory: both enhancing and impairing effects have been reported. These opposite effects may be explained if the different time courses of stress hormone, in particular catecholamine and glucocorticoid, actions are taken into account. Integrating two popular models, we argue here that rapid catecholamine and non-genomic glucocorticoid actions interact in the basolateral amygdala to shift the organism into a 'memory formation mode' that facilitates the consolidation of stressful experiences into long-term memory. The undisturbed consolidation of these experiences is then promoted by genomic glucocorticoid actions that induce a 'memory storage mode', which suppresses competing cognitive processes and thus reduces interference by unrelated material. Highlighting some current trends in the field, we further argue that stress affects learning and memory processes beyond the basolateral amygdala and hippocampus and that stress may pre-program subsequent memory performance when it is experienced during critical periods of brain development.",
"title": ""
},
{
"docid": "7e671e124f330ae91ad5567cf80500cb",
"text": "In recent years, LTE (Long Term Evolution) has been one of the mainstreams of current wireless communication systems. But when its HSS authenticates UEs, the random number RAND generated by HSS for creating other keys during its delivery from HSS to UE is unencrypted. Also, many parameters are generated by invoking a function with only one input key, thus very easily to be cracked. So in this paper, we propose an improved approach in which the Diffie-Hellman algorithm is employed to solve the exposure problem of RAND in the authentication process, and an Pair key mechanism is deployed when creating other parameters, i.e., parameters are generated by invoking a function with at least two input keys. The purpose is increasing the security levels of all generated parameters so as to make LTE more secure than before.",
"title": ""
},
{
"docid": "635da218aa9a1b528fbc378844b393fd",
"text": "A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and numerically unstable compared to linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.",
"title": ""
},
{
"docid": "e808fa6ebe5f38b7672fad04c5f43a3a",
"text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.",
"title": ""
},
{
"docid": "5ad4b3c5905b7b716a806432b755e60b",
"text": "The formation of both germline cysts and the germinal epithelium is described during the ovary development in Cyprinus carpio. As in the undifferentiated gonad of mammals, cords of PGCs become oogonia when they are surrounded by somatic cells. Ovarian differentiation is triggered when oogonia proliferate and enter meiosis, becoming oocytes. Proliferation of single oogonium results in clusters of interconnected oocytes, the germline cysts, that are encompassed by somatic prefollicle cells and form cell nests. Both PGCs and cell nests are delimited by a basement membrane. Ovarian follicles originate from the germline cysts, about the time of meiotic arrest, as prefollicle cells surround oocytes, individualizing them. They synthesize a basement membrane and an oocyte forms a follicle. With the formation of the stroma, unspecialized mesenchymal cells differentiate, and encompass each follicle, forming the theca. The follicle, basement membrane, and theca constitute the follicle complex. Along the ventral region of the differentiating ovary, the epithelium invaginates to form the ovigerous lamellae whose developing surface epithelium, the germinal epithelium, is composed of epithelial cells, germline cysts with oogonia, oocytes, and developing follicles. The germinal epithelium rests upon a basement membrane. The follicles complexes are connected to the germinal epithelium by a shared portion of basement membrane. In the differentiated ovary, germ cell proliferation in the epithelium forms nests in which there are the germline cysts. Germline cysts, groups of cells that form from a single founder cell and are joined by intercellular bridges, are conserved throughout the vertebrates, as is the germinal epithelium.",
"title": ""
},
{
"docid": "fe44269ca863c48108cd6ef07a9fbee5",
"text": "Heart disease prediction is designed to support clinicians in their diagnosis. We proposed a method for classifying the heart disease data. The patient’s record is predicted to find if they have symptoms of heart disease through Data mining. It is essential to find the best fit classification algorithm that has greater accuracy on classification in the case of heart disease prediction. Since the data is huge attribute selection method used for reducing the dataset. Then the reduced data is given to the classification .In the Investigation, the hybrid attribute selection method combining CFS and Filter Subset Evaluation gives better accuracy for classification. We also propose a new feature selection method algorithm which is the hybrid method combining CFS and Bayes Theorem. The proposed algorithm provides better accuracy compared to the traditional algorithm and the hybrid Algorithm CFS+FilterSubsetEval.",
"title": ""
},
{
"docid": "d3c3195b8272bd41d0095e236ddb1d96",
"text": "The extension of in vivo optical imaging for disease screening and image-guided surgical interventions requires brightly emitting, tissue-specific materials that optically transmit through living tissue and can be imaged with portable systems that display data in real-time. Recent work suggests that a new window across the short-wavelength infrared region can improve in vivo imaging sensitivity over near infrared light. Here we report on the first evidence of multispectral, real-time short-wavelength infrared imaging offering anatomical resolution using brightly emitting rare-earth nanomaterials and demonstrate their applicability toward disease-targeted imaging. Inorganic-protein nanocomposites of rare-earth nanomaterials with human serum albumin facilitated systemic biodistribution of the rare-earth nanomaterials resulting in the increased accumulation and retention in tumour tissue that was visualized by the localized enhancement of infrared signal intensity. Our findings lay the groundwork for a new generation of versatile, biomedical nanomaterials that can advance disease monitoring based on a pioneering infrared imaging technique.",
"title": ""
},
{
"docid": "4aed26d5f35f6059f4afe8cc7225f6a8",
"text": "The rapid and quick growth of smart mobile devices has caused users to demand pervasive mobile broadband services comparable to the fixed broadband Internet. In this direction, the research initiatives on 5G networks have gained accelerating momentum globally. 5G Networks will act as a nervous system of the digital society, economy, and everyday peoples life and will enable new future Internet of Services paradigms such as Anything as a Service, where devices, terminals, machines, also smart things and robots will become innovative tools that will produce and will use applications, services and data. However, future Internet will exacerbate the need for improved QoS/QoE, supported by services that are orchestrated on-demand and that are capable of adapt at runtime, depending on the contextual conditions, to allow reduced latency, high mobility, high scalability, and real time execution. A new paradigm called Fog Computing, or briefly Fog has emerged to meet these requirements. Fog Computing extends Cloud Computing to the edge of the network, reduces service latency, and improves QoS/QoE, resulting in superior user-experience. This paper provides a survey of 5G and Fog Computing technologies and their research directions, that will lead to Beyond-5G Network in the Fog.",
"title": ""
},
{
"docid": "7bdaa7eec3d2830ceceb2b398edb219b",
"text": "OBJECTIVES\nTo review how health informatics systems based on machine learning methods have impacted the clinical management of patients, by affecting clinical practice.\n\n\nMETHODS\nWe reviewed literature from 2010-2015 from databases such as Pubmed, IEEE xplore, and INSPEC, in which methods based on machine learning are likely to be reported. We bring together a broad body of literature, aiming to identify those leading examples of health informatics that have advanced the methodology of machine learning. While individual methods may have further examples that might be added, we have chosen some of the most representative, informative exemplars in each case.\n\n\nRESULTS\nOur survey highlights that, while much research is taking place in this high-profile field, examples of those that affect the clinical management of patients are seldom found. We show that substantial progress is being made in terms of methodology, often by data scientists working in close collaboration with clinical groups.\n\n\nCONCLUSIONS\nHealth informatics systems based on machine learning are in their infancy and the translation of such systems into clinical management has yet to be performed at scale.",
"title": ""
},
{
"docid": "dc48b68a202974f62ae63d1d14002adf",
"text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.",
"title": ""
},
{
"docid": "bbc936a3b4cd942ba3f2e1905d237b82",
"text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.",
"title": ""
},
{
"docid": "e7bfafee5cfaaa1a6a41ae61bdee753d",
"text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record",
"title": ""
},
{
"docid": "14b15f15cb7dbb3c19a09323b4b67527",
"text": " Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification Providing a detailed analysis of legal, ethical and social repercussion of reversible/non-reversible de-identification Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications",
"title": ""
}
] |
scidocsrr
|
ef0878d4556e16bbb03bbc0313a7ee87
|
Offline Handwriting Recognition on Devanagari Using a New Benchmark Dataset
|
[
{
"docid": "9139eed82708f03a097ba0b383f5a346",
"text": "This paper presents a novel approach towards Indic handwritten word recognition using zone-wise information. Because of complex nature due to compound characters, modifiers, overlapping and touching, etc., character segmentation and recognition is a tedious job in Indic scripts (e.g. Devanagari, Bangla, Gurumukhi, and other similar scripts). To avoid character segmentation in such scripts, HMMbased sequence modeling has been used earlier in holistic way. This paper proposes an efficient word recognition framework by segmenting the handwritten word images horizontally into three zones (upper, middle and lower) and recognize the corresponding zones. The main aim of this zone segmentation approach is to reduce the number of distinct component classes compared to the total number of classes in Indic scripts. As a result, use of this zone segmentation approach enhances the recognition performance of the system. The components in middle zone where characters are mostly touching are recognized using HMM. After the recognition of middle zone, HMM based Viterbi forced alignment is applied to mark the left and right boundaries of the characters. Next, the residue components, if any, in upper and lower zones in their respective boundary are combined to achieve the final word level recognition. Water reservoir feature has been integrated in this framework to improve the zone segmentation and character alignment defects while segmentation. A novel sliding window-based feature, called Pyramid Histogram of Oriented Gradient (PHOG) is proposed for middle zone recognition. PHOG features has been compared with other existing features and found robust in Indic script recognition. An exhaustive experiment is performed on two Indic scripts namely, Bangla and Devanagari for the performance evaluation. From the experiment, it has been noted that proposed zone-wise recognition improves accuracy with respect to the traditional way of Indic word recognition.",
"title": ""
}
] |
[
{
"docid": "166b9cb75f8f81e3f143a44b1b3e0b99",
"text": "This study aimed to classify different emotional states by means of EEG-based functional connectivity patterns. Forty young participants viewed film clips that evoked the following emotional states: neutral, positive, or negative. Three connectivity indices, including correlation, coherence, and phase synchronization, were used to estimate brain functional connectivity in EEG signals. Following each film clip, participants were asked to report on their subjective affect. The results indicated that the EEG-based functional connectivity change was significantly different among emotional states. Furthermore, the connectivity pattern was detected by pattern classification analysis using Quadratic Discriminant Analysis. The results indicated that the classification rate was better than chance. We conclude that estimating EEG-based functional connectivity provides a useful tool for studying the relationship between brain activity and emotional states.",
"title": ""
},
{
"docid": "de016ffaace938c937722f8a47cc0275",
"text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.",
"title": ""
},
{
"docid": "c66b529b1de24c8031622f3d28b3ada4",
"text": "This work addresses the design of a dual-fed aperture-coupled circularly polarized microstrip patch antenna, operating at its fundamental mode. A numerical parametric assessment was carried out, from which some general practical guidelines that may aid the design of such antennas were derived. Validation was achieved by a good match between measured and simulated results obtained for a specific antenna set assembled, chosen from the ensemble of the numerical analysis.",
"title": ""
},
{
"docid": "88de6047cec54692dea08abe752acd25",
"text": "Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It them presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.",
"title": ""
},
{
"docid": "8a6e7ac784b63253497207c63caa1036",
"text": "Synchronized control (SYNC) is widely adopted for doubly fed induction generator (DFIG)-based wind turbine generators (WTGs) in microgrids and weak grids, which applies P-f droop control to achieve grid synchronization instead of phase-locked loop. The DFIG-based WTG with SYNC will reach a new equilibrium of rotor speed under frequency deviation, resulting in the WTG's acceleration or deceleration. The acceleration/deceleration process can utilize the kinetic energy stored in the rotating mass of WTG to provide active power support for the power grid, but the WTG may lose synchronous stability simultaneously. This stability problem occurs when the equilibrium of rotor speed is lost and the rotor speed exceeds the admissible range during the frequency deviations, which will be particularly analyzed in this paper. It is demonstrated that the synchronous stability can be improved by increasing the P-f droop coefficient. However, increasing the P-f droop coefficient will deteriorate the system's small signal stability. To address this contradiction, a modified synchronized control strategy is proposed. Simulation results verify the effectiveness of the analysis and the proposed control strategy.",
"title": ""
},
{
"docid": "66c8bf3b0cfbfdf8add2fffd055b7f03",
"text": "This paper continues the long-standing tradition of gradually improving the construction speed of spatial acceleration structures using sorted Morton codes. Previous work on this topic forms a clear sequence where each new paper sheds more light on the nature of the problem and improves the hierarchy generation phase in terms of performance, simplicity, parallelism and generality. Previous approaches constructed the tree by firstly generating the hierarchy and then calculating the bounding boxes of each node by using a bottom-up traversal. Continuing the work, we present an improvement by providing a bottom-up method that finds each node’s parent while assigning bounding boxes, thus constructing the tree in linear time in a single kernel launch. Also, our method allows clustering the sorted points using an user-defined distance metric function.",
"title": ""
},
{
"docid": "8a0ff953c06daa958da79c6c6d3cfc72",
"text": "Incremental Dynamic Analysis (IDA) is presented as a powerful tool to evaluate the variability in the seismic demand and capacity of non-deterministic structural models, building upon existing methodologies of Monte Carlo simulation and approximate moment-estimation. A nine-story steel moment-resisting frame is used as a testbed, employing parameterized moment-rotation relationships with non-deterministic quadrilinear backbones for the beam plastic-hinges. The uncertain properties of the backbones include the yield moment, the post-yield hardening ratio, the end-of-hardening rotation, the slope of the descending branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately assess the seismic performance of the model for any combination of the parameters by performing multiple nonlinear timehistory analyses for a suite of ground motion records. Sensitivity analyses on both the IDA and the static pushover level reveal the yield moment and the two rotational-ductility parameters to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual seismic performance we employ a) Monte Carlo simulation with latin hypercube sampling, b) point-estimate and c) first-order second-moment techniques, thus offering competing methods that represent different compromises between speed and accuracy. The final results provide firm ground for challenging current assumptions in seismic guidelines on using a median-parameter model to estimate the median seismic performance and employing the well-known square-root-sum-of-squares rule to combine aleatory randomness and epistemic uncertainty. Copyright c © 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "f61d8bcd3049f908784a7512d93010b4",
"text": "This paper presents the results from a feasibility study where an artificial neural network is applied to detect person-borne improvised explosive devices (IEDs) from imagery acquired using three different sensors; a radar array, an infrared (IR) camera, and a passive millimeter-wave camera. The data set was obtained from the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T), and consists of hundreds of images of human subjects concealing various simulated IEDs, and clutter objects, beneath different types of clothing. The network used for detection is a hybrid, where feature extraction is performed using a multi-layer convolutional neural network, also known as a deep learning network, and final classification performed using a support vector machine (SVM). The performance of the combined network is scored using receiver operating curves for each IED type and sensor configuration. The results demonstrate (i) that deep learning is effective at extracting useful information from sensor imagery, and (ii) that performance is boosted significantly by combining complementary data from different sensor types.",
"title": ""
},
{
"docid": "6a3cc8319b7a195ce7ec05a70ad48c7a",
"text": "Image caption generation is the problem of generating a descriptive sentence of an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects and methods for description-generation of images. As there has been great interest in research community, to come up with automatic ways to retrieve images based on content. There are numbers of techniques, that, have been used to solve this problem, and purpose of this paper is to have an overview of many of these approaches and databases used for description generation purpose. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "f2b4f786ecd63b454437f066deecfe4a",
"text": "The causal role of human papillomavirus (HPV) in all cancers of the uterine cervix has been firmly established biologically and epidemiologically. Most cancers of the vagina and anus are likewise caused by HPV, as are a fraction of cancers of the vulva, penis, and oropharynx. HPV-16 and -18 account for about 70% of cancers of the cervix, vagina, and anus and for about 30-40% of cancers of the vulva, penis, and oropharynx. Other cancers causally linked to HPV are non-melanoma skin cancer and cancer of the conjunctiva. Although HPV is a necessary cause of cervical cancer, it is not a sufficient cause. Thus, other cofactors are necessary for progression from cervical HPV infection to cancer. Long-term use of hormonal contraceptives, high parity, tobacco smoking, and co-infection with HIV have been identified as established cofactors; co-infection with Chlamydia trachomatis (CT) and herpes simplex virus type-2 (HSV-2), immunosuppression, and certain dietary deficiencies are other probable cofactors. Genetic and immunological host factors and viral factors other than type, such as variants of type, viral load and viral integration, are likely to be important but have not been clearly identified.",
"title": ""
},
{
"docid": "b07f858d08f40f61f3ed418674948f12",
"text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.",
"title": ""
},
{
"docid": "b1df1e6a6279501f45b65361e5a3917e",
"text": "Politicians have high expectations for commercial open data use. Yet, companies appear to challenge the assumption that open data can be used to create competitive advantage, since any company can access open data and since open data use requires scarce resources. In this paper we examine commercial open data use for creating competitive advantage from the perspective of Resource Based Theory (RBT) and Resource Dependency Theory (RDT). Based on insights from a scenario, interviews and a survey and from RBT and RDT as a reference theory, we derive seven propositions. Our study suggests that the generation of competitive advantage with open data requires a company to have in-house capabilities and resources for open data use. The actual creation of competitive advantage might not be simple. The propositions also draw attention to the accomplishment of unique benefits for a company through the combination of internal and external resources. Recommendations for further research include testing the propositions.",
"title": ""
},
{
"docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f",
"text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.",
"title": ""
},
{
"docid": "1288abeaddded1564b607c9f31924697",
"text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.",
"title": ""
},
{
"docid": "87788e55769a7a840aaf41d9c3c5aec6",
"text": "Cyber-attack detection is used to identify cyber-attacks while they are acting on a computer and network system to compromise the security (e.g., availability, integrity, and confidentiality) of the system. This paper presents a cyber-attack detection technique through anomaly-detection, and discusses the robustness of the modeling technique employed. In this technique, a Markov-chain model represents a profile of computer-event transitions in a normal/usual operating condition of a computer and network system (a norm profile). The Markov-chain model of the norm profile is generated from historic data of the system's normal activities. The observed activities of the system are analyzed to infer the probability that the Markov-chain model of the norm profile supports the observed activities. The lower probability the observed activities receive from the Markov-chain model of the norm profile, the more likely the observed activities are anomalies resulting from cyber-attacks, and vice versa. This paper presents the learning and inference algorithms of this anomaly-detection technique based on the Markov-chain model of a norm profile, and examines its performance using the audit data of UNIX-based host machines with the Solaris operating system. The robustness of the Markov-chain model for cyber-attack detection is presented through discussions & applications. To apply the Markov-chain technique and other stochastic process techniques to model the sequential ordering of events, the quality of activity-data plays an important role in the performance of intrusion detection. The Markov-chain technique is not robust to noise in the data (the mixture level of normal activities and intrusive activities). The Markov-chain technique produces desirable performance only at a low noise level. This study also shows that the performance of the Markov-chain techniques is not always robust to the window size: as the window size increases, the amount of noise in the window also generally increases. Overall, this study provides some support for the idea that the Markov-chain technique might not be as robust as the other intrusion-detection methods such as the chi-square distance test technique , although it can produce better performance than the chi-square distance test technique when the noise level of the data is low, such as the Mill & Pascal data in this study.",
"title": ""
},
{
"docid": "d46172afedf3e86d64ee3c7dcfbd5c3c",
"text": "This paper compares the radial vibration forces in 10-pole/12-slot fractional-slot SPM and IPM machines which are designed to produce the same output torque, and employ an identical stator but different SPM, V-shape and arc-shape IPM rotor topologies. The airgap field and radial vibration force density distribution as a function of angular position and corresponding space harmonics (vibration modes) are analysed using the finite element method together with frozen permeability technique. It is shown that not only the lowest harmonic of radial force in IPM machine is much higher, but also the (2p)th harmonic of radial force in IPM machine is also higher than that in SPM machine.",
"title": ""
},
{
"docid": "6387707b2aba0400e517e427b26e4589",
"text": "This thesis investigates the phase noise of two different 2-stage cross-coupled pair unsaturated ring oscillators with no tail current source. One oscillator consists of top crosscoupled pair delay cells, and the other consists of top cross-coupled pair and bottom crosscoupled pair delay cells. Under a low supply voltage restriction, a phase noise model is developed and applied to both ring oscillators. Both top cross-coupled pair and top and bottom cross-coupled pair oscillators are fabricated with 0.13 μm CMOS technology. Phase noise measurements of -92 dBc/Hz and -89 dBc/Hz ,respectively, at 1 MHz offset is obtained from the chip, which agree with theory and simulations. Top cross-coupled ring oscillator, with phase noise of -92 dBc/Hz at 1 MHz offset, is implemented in a second order sigma-delta time to digital converter. System level and transistor level functional simulation and timing jitter simulation are obtained.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "44e0cd40b9a06abd5a4e54524b214dce",
"text": "A large majority of road accidents are relative to driver fatigue, distraction and drowsiness which are widely believed to be the largest contributors to fatalities and severe injuries, either as a direct cause of falling asleep at the wheel or as a contributing factor in lowering the attention and reaction time of a driver in critical situations. Thus to prevent road accidents, a countermeasure device has to be used. This paper illuminates and highlights the various measures that have been studied to detect drowsiness such as vehicle based, physiological based, and behavioural based measures. The main objective is to develop a real time non-contact system which will be able to identify driver’s drowsiness beforehand. The system uses an IR sensitive monochrome camera that detects the position and state of the eyes to calculate the drowsiness of a driver. Once the driver is detected as drowsy, the system will generate warning signals to alert the driver. In case the signal is not re-established the system will shut off the engine to prevent any mishap. Keywords— Drowsiness, Road Accidents, Eye Detection, Face Detection, Blink Pattern, PERCLOS, MATLAB, Arduino Nano",
"title": ""
}
] |
scidocsrr
|
87a02f40994745a087c6cb72768462c0
|
Personalized, Cross-Lingual TTS Using Phonetic Posteriorgrams
|
[
{
"docid": "0f6183057c6b61cefe90e4fa048ab47f",
"text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.",
"title": ""
},
{
"docid": "e95541d0401a196b03b94dd51dd63a4b",
"text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and",
"title": ""
}
] |
[
{
"docid": "4c7fed8107062e530e80ae784451b752",
"text": "Tree structured models have been widely used for determining the pose of a human body, from either 2D or 3D data. While such models can effectively represent the kinematic constraints of the skeletal structure, they do not capture additional constraints such as coordination of the limbs. Tree structured models thus miss an important source of information about human body pose, as limb coordination is necessary for balance while standing, walking, or running, as well as being evident in other activities such as dancing and throwing. In this paper, we consider the use of undirected graphical models that augment a tree structure with latent variables in order to account for coordination between limbs. We refer to these as common-factor models, since they are constructed by using factor analysis to identify additional correlations in limb position that are not accounted for by the kinematic tree structure. These common-factor models have an underlying tree structure and thus a variant of the standard Viterbi algorithm for a tree can be applied for efficient estimation. We present some experimental results contrasting common-factor models with tree models, and quantify the improvement in pose estimation for 2D image data.",
"title": ""
},
{
"docid": "8c0f20061bd09b328748d256d5ece7cc",
"text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"title": ""
},
{
"docid": "55160cc3013b03704555863c710e6d21",
"text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.",
"title": ""
},
{
"docid": "abef126b2e8cb932378013e1cf125b15",
"text": "We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique. Our approach is an extension to the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model on bidirectional LSTMs. In our experiments, we find our baseline graphbased parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to the treebanks of the same language with different domains, we observe an additional gain in the performance, in particular for the domains with less training data.",
"title": ""
},
{
"docid": "997a1ec16394a20b3a7f2889a583b09d",
"text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.",
"title": ""
},
{
"docid": "3dfe5dbdd83f0c56f403884f38420ae7",
"text": "There is an increasing interest in studying control systems employing multiple sensors and actuators that are geographically distributed. Communication is an important component of these distributed and networked control systems. Hence, there is a need to understand the interactions between the control components and the communication components of the distributed system. In this paper, we formulate a control problem with a communication channel connecting the sensor to the controller. Our task involves designing the channel encoder and channel decoder along with the controller to achieve different control objectives. We provide upper and lower bounds on the channel rate required to achieve these different control objectives. In many cases, these bounds are tight. In doing so, we characterize the \"information complexity\" of different control objectives.",
"title": ""
},
{
"docid": "a34a49a337cd0d198fe8bcc05f8a91ea",
"text": "In most real-world audio recordings, we encounter several types of audio events. In this paper, we develop a technique for detecting signature audio events, that is based on identifying patterns of occurrences of automatically learned atomic units of sound, which we call Acoustic Unit Descriptors or AUDs. Experiments show that the methodology works as well for detection of individual events and their boundaries in complex recordings.",
"title": ""
},
{
"docid": "b9546d8f52b19ba99bb9c8f4dc62f2bd",
"text": "One of the main unresolved problems that arise during the data mining process is treating data that contains temporal information. In this case, a complete understanding of the entire phenomenon requires that the data should be viewed as a sequence of events. Temporal sequences appear in a vast range of domains, from engineering, to medicine and finance, and the ability to model and extract information from them is crucial for the advance of the information society. This paper provides a survey on the most significant techniques developed in the past ten years to deal with temporal sequences.",
"title": ""
},
{
"docid": "05fa2bcd251f44f8a62e90104844926f",
"text": "A challenging task in the natural language question answering (Q/A for short) over RDF knowledge graph is how to bridge the gap between unstructured natural language questions (NLQ) and graph-structured RDF data (GOne of the effective tools is the \"template\", which is often used in many existing RDF Q/A systems. However, few of them study how to generate templates automatically. To the best of our knowledge, we are the first to propose a join approach for template generation. Given a workload D of SPARQL queries and a set N of natural language questions, the goal is to find some pairs q, n, for q∈ D ∧ n ∈, N, where SPARQL query q is the best match for natural language question n. These pairs provide promising hints for automatic template generation. Due to the ambiguity of the natural languages, we model the problem above as an uncertain graph join task. We propose several structural and probability pruning techniques to speed up joining. Extensive experiments over real RDF Q/A benchmark datasets confirm both the effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "891efd54485c7cf73edd690e0d9b3cfa",
"text": "Quantitative-diffusion-tensor MRI consists of deriving and displaying parameters that resemble histological or physiological stains, i.e., that characterize intrinsic features of tissue microstructure and microdynamics. Specifically, these parameters are objective, and insensitive to the choice of laboratory coordinate system. Here, these two properties are used to derive intravoxel measures of diffusion isotropy and the degree of diffusion anisotropy, as well as intervoxel measures of structural similarity, and fiber-tract organization from the effective diffusion tensor, D, which is estimated in each voxel. First, D is decomposed into its isotropic and anisotropic parts, [D] I and D - [D] I, respectively (where [D] = Trace(D)/3 is the mean diffusivity, and I is the identity tensor). Then, the tensor (dot) product operator is used to generate a family of new rotationally and translationally invariant quantities. Finally, maps of these quantitative parameters are produced from high-resolution diffusion tensor images (in which D is estimated in each voxel from a series of 2D-FT spin-echo diffusion-weighted images) in living cat brain. Due to the high inherent sensitivity of these parameters to changes in tissue architecture (i.e., macromolecular, cellular, tissue, and organ structure) and in its physiologic state, their potential applications include monitoring structural changes in development, aging, and disease.",
"title": ""
},
{
"docid": "c795c3fbf976c5746c75eb33c622ad21",
"text": "We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems1.",
"title": ""
},
{
"docid": "9f84630422777d869edd7167ff6da443",
"text": "Video surveillance, closed-circuit TV and IP-camera systems became virtually omnipresent and indispensable for many organizations, businesses, and users. Their main purpose is to provide physical security, increase safety, and prevent crime. They also became increasingly complex, comprising many communication means, embedded hardware and non-trivial firmware. However, most research to date focused mainly on the privacy aspects of such systems, and did not fully address their issues related to cyber-security in general, and visual layer (i.e., imagery semantics) attacks in particular. In this paper, we conduct a systematic review of existing and novel threats in video surveillance, closed-circuit TV and IP-camera systems based on publicly available data. The insights can then be used to better understand and identify the security and the privacy risks associated with the development, deployment and use of these systems. We study existing and novel threats, along with their existing or possible countermeasures, and summarize this knowledge into a comprehensive table that can be used in a practical way as a security checklist when assessing cyber-security level of existing or new CCTV designs and deployments. We also provide a set of recommendations and mitigations that can help improve the security and privacy levels provided by the hardware, the firmware, the network communications and the operation of video surveillance systems. We hope the findings in this paper will provide a valuable knowledge of the threat landscape that such systems are exposed to, as well as promote further research and widen the scope of this field beyond its current boundaries.",
"title": ""
},
{
"docid": "c9966a6589d75b2ee87ba375e690de93",
"text": "The novel electronic properties of graphene, including a linear energy dispersion relation and purely two-dimensional structure, have led to intense research into possible applications of this material in nanoscale devices. Here we report the first observation of saturating transistor characteristics in a graphene field-effect transistor. The saturation velocity depends on the charge-carrier concentration and we attribute this to scattering by interfacial phonons in the SiO2 layer supporting the graphene channels. Unusual features in the current-voltage characteristic are explained by a field-effect model and diffusive carrier transport in the presence of a singular point in the density of states. The electrostatic modulation of the channel through an efficiently coupled top gate yields transconductances as high as 150 microS microm-1 despite low on-off current ratios. These results demonstrate the feasibility of two-dimensional graphene devices for analogue and radio-frequency circuit applications without the need for bandgap engineering.",
"title": ""
},
{
"docid": "1014a09fbded05ab4eb2438aa3631d2d",
"text": "In the last decade, self-myofascial release has become an increasingly common modality to supplement traditional methods of massage, so a masseuse is not necessary. However, there are limited clinical data demonstrating the efficacy or mechanism of this treatment on athletic performance. The purpose of this study was to determine whether the use of myofascial rollers before athletic tests can enhance performance. Twenty-six (13 men and 13 women) healthy college-aged individuals (21.56 ± 2.04 years, 23.97 ± 3.98 body mass index, 20.57 ± 12.21 percent body fat) were recruited. The study design was a randomized crossover design in which subject performed a series of planking exercises or foam rolling exercises and then performed a series of athletic performance tests (vertical jump height and power, isometric force, and agility). Fatigue, soreness, and exertion were also measured. A 2 × 2 (trial × gender) analysis of variance with repeated measures and appropriate post hoc was used to analyze the data. There were no significant differences between foam rolling and planking for all 4 of the athletic tests. However, there was a significant difference between genders on all the athletic tests (p ≤ 0.001). As expected, there were significant increases from pre to post exercise during both trials for fatigue, soreness, and exertion (p ≤ 0.01). Postexercise fatigue after foam rolling was significantly less than after the subjects performed planking (p ≤ 0.05). The reduced feeling of fatigue may allow participants to extend acute workout time and volume, which can lead to chronic performance enhancements. However, foam rolling had no effect on performance.",
"title": ""
},
{
"docid": "96ab2d8de746234c79e87902de49f343",
"text": "Background subtraction is one of the most commonly used components in machine vision systems. Despite the numerous algorithms proposed in the literature and used in practical applications, key challenges remain in designing a single system that can handle diverse environmental conditions. In this paper we present Multiple Background Model based Background Subtraction Algorithm as such a candidate. The algorithm was originally designed for handling sudden illumination changes. The new version has been refined with changes at different steps of the process, specifically in terms of selecting optimal color space, clustering of training images for Background Model Bank and parameter for each channel of color space. This has allowed the algorithm's applicability to wide variety of challenges associated with change detection including camera jitter, dynamic background, Intermittent Object Motion, shadows, bad weather, thermal, night videos etc. Comprehensive evaluation demonstrates the superiority of algorithm against state of the art.",
"title": ""
},
{
"docid": "61874faf29648c7d90b9cd24f6368a33",
"text": "Industry is currently undergoing a transformation towards full digitalization and intelligentization of manufacturing processes. Visionary but quite realistic concepts such as the Internet of Things, Industrial Internet, Cloud-based Manufacturing and Smart Manufacturing are drivers of the so called Fourth Industrial Revolution which is commonly referred to as Industry 4.0. Although a common agreement exists on the necessity for technological advancement of production technologies and business models in the sense of Industry 4.0, a major obstacle lies in the perceived complexity and abstractness which partly hinders its quick transformation into industrial practice. To overcome these burdens, we suggest a Scenario-based Industry 4.0 Learning Factory concept that we are currently planning to implement in Austria's first Industry 4.0 Pilot Factory. The concept is built upon a tentative competency model for Industry 4.0 and the use of scenarios for problem-oriented learning of future produc-",
"title": ""
},
{
"docid": "72c15ca427c2ba991a7dfd7a52d32a43",
"text": "From the ancient times, abortion appeared as a method of controlling the fertility. But starting with XIX century, modern states used the abortion as a mechanism of demographic policy, the state intending to adjust the fertility of population by this. It is worth underlining the ethical and moral aspects of abortion, which determined strong debates on the political scene and further at the level of civil society and mass media. So, in the last decades there have been questioned not only ethic and moral implications of abortion, but also practical outsets. A responsible demographic policy should take into account the whole set of socio-economic and cultural factors that condition a society.",
"title": ""
},
{
"docid": "b19aab238e0eafef52974a87300750a3",
"text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.",
"title": ""
},
{
"docid": "77c98efaba38e54e8aae1216ed9ac0c0",
"text": "There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.) Questions that experts in artificial intelligence (AI) ask opaque systems provide inside explanations, focused on debugging, reliability, and validation. These are different from those that society will ask of these systems to build trust and confidence in their decisions. Although explanatory AI systems can answer many questions that experts desire, they often don’t explain why they made decisions in a way that is precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs’ use in decision-making and inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.",
"title": ""
},
{
"docid": "ef2738cfced7ef069b13e5b5dca1558b",
"text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS",
"title": ""
}
] |
scidocsrr
|
d7e0b5a0e8d081c9cb08eaf06fe35909
|
Learning to Play Computer Games with Deep Learning and Reinforcement Learning Final Report
|
[
{
"docid": "28ee32149227e4a26bea1ea0d5c56d8c",
"text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.",
"title": ""
}
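The pseudo-count construction summarized above admits a very small illustration. The following Python sketch is a hedged toy: it uses a plain empirical-frequency model as the density model and the bonus form beta / sqrt(N_hat + 0.01); the names EmpiricalDensity, pseudo_count and exploration_bonus are ours, and the actual work derives pseudo-counts from a pixel-level density model over Atari frames rather than from discrete symbols.

```python
class EmpiricalDensity:
    """Toy density model: empirical frequency of discrete observations."""
    def __init__(self):
        self.counts, self.total = {}, 0

    def prob(self, x):
        return self.counts.get(x, 0) / self.total if self.total else 0.0

    def update(self, x):
        self.counts[x] = self.counts.get(x, 0) + 1
        self.total += 1


def pseudo_count(rho, rho_next):
    # N_hat = rho * (1 - rho') / (rho' - rho), where rho is the model's
    # probability of x before observing it and rho' the probability after
    # a single update (the "recoding probability").
    return rho * (1.0 - rho_next) / (rho_next - rho)


def exploration_bonus(n_hat, beta=0.05):
    # one common form of count-based bonus added to the environment reward
    return beta / (n_hat + 0.01) ** 0.5


model = EmpiricalDensity()
for obs in ["a", "b", "a", "a", "c"]:
    rho = model.prob(obs)
    model.update(obs)
    n_hat = pseudo_count(rho, model.prob(obs))
    print(obs, round(n_hat, 2), round(exploration_bonus(n_hat), 3))
```

With this empirical model the pseudo-count simply recovers the true visit count, which is the sanity check the construction is designed to pass; the interesting case is when the density model generalizes across similar states.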
] |
[
{
"docid": "1dee4c916308295626bce658529a8e0e",
"text": "Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from -adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an informationtheoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.",
"title": ""
},
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "afbe496b98f6bb956cf22b5f08afec93",
"text": "The fibula osteoseptocutaneous flap is a versatile method for reconstruction of composite-tissue defects of the mandible. The vascularized fibula can be osteotomized to permit contouring of any mandibular defect. The skin flap is reliable and can be used to resurface intraoral, extraoral, or both intraoral and extraoral defects. Twenty-seven fibula osteoseptocutaneous flaps were used for composite mandibular reconstructions in 25 patients. All the defects were reconstructed primarily following resection of oral cancers (23), excision of radiation-induced osteonecrotic lesions (2), excision of a chronic osteomyelitic lesion (1), or postinfective mandibular hypoplasia (1). The mandibular defects were between 6 and 14 cm in length. The number of fibular osteotomy sites ranged from one to three. All patients had associated soft-tissue losses. Six of the reconstructions had only oral lining defects, and 1 had only an external facial defect, while 18 had both lining and skin defects. Five patients used the skin portion of the fibula osteoseptocutaneous flaps for both oral lining and external facial reconstruction, while 13 patients required a second simultaneous free skin or musculocutaneous flap because of the size of the defects. Four of these flaps used the distal runoff of the peroneal pedicles as the recipient vessels. There was one total flap failure (96.3 percent success). There were no instances of isolated partial or complete skin necrosis. All osteotomy sites healed primarily. The contour of the mandibles was good to excellent.",
"title": ""
},
{
"docid": "765b3b922a6d2cbc9f4af71e02b76f41",
"text": "We make clear why virtual currencies are of interest, how self-regulation has failed, and what useful lessons can be learned. Finally, we produce useful and semi-permanent findings into the usefulness of virtual currencies in general, blockchains as a means of mining currency, and the profundity of Bitcoin as compared with the development of block chain technologies. We conclude that though Bitcoin may be the equivalent of Second Life a decade later, so blockchains may be the equivalent of Web 2.0 social networks, a truly transformative social technology.",
"title": ""
},
{
"docid": "628c8b906e3db854ea92c021bb274a61",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
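To make the spatial-plus-temporal idea concrete, here is a minimal PyTorch sketch in that spirit: a small local CNN encodes the demand patch around a region at each time step, and an LSTM consumes the resulting sequence. It is only a stand-in for the spatial and temporal views; the semantic view, the real layer sizes and the training pipeline of DMVST-Net are not reproduced, and every name and shape below is an illustrative assumption.

```python
import torch
import torch.nn as nn


class SpatioTemporalDemandNet(nn.Module):
    """Toy spatial-temporal demand predictor: a local CNN per time step
    feeds an LSTM over the sequence; the final hidden state predicts the
    next-interval demand for the target region."""
    def __init__(self, patch_size=9, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * patch_size * patch_size, hidden), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, 1, patch, patch) demand maps around the target region
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, *x.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # predicted demand for the next step


model = SpatioTemporalDemandNet()
demand_history = torch.rand(8, 6, 1, 9, 9)    # 8 regions, 6 past intervals
print(model(demand_history).shape)            # torch.Size([8, 1])
```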
{
"docid": "3dd6682c4307567e49b025d11b36b8a5",
"text": "Deep generative architectures provide a way to model not only images, but also complex, 3-dimensional objects, such as point clouds. In this work, we present a novel method to obtain meaningful representations of 3D shapes that can be used for clustering and reconstruction. Contrary to existing methods for 3D point cloud generation that train separate decoupled models for representation learning and generation, our approach is the first end-to-end solution that allows to simultaneously learn a latent space of representation and generate 3D shape out of it. To achieve this goal, we extend a deep Adversarial Autoencoder model (AAE) to accept 3D input and create 3D output. Thanks to our end-to-end training regime, the resulting method called 3D Adversarial Autoencoder (3dAAE) obtains either binary or continuous latent space that covers much wider portion of training data distribution, hence allowing smooth interpolation between the shapes. Finally, our extensive quantitative evaluation shows that 3dAAE provides state-of-theart results on a set of benchmark tasks.",
"title": ""
},
{
"docid": "773da4f213b7cbe7421c2f1481b71341",
"text": "To meet the demand of increasing mobile data traffic and provide better user experience, heterogeneous cellular networks (HCNs) have become a promising solution to improve both the system capacity and coverage. However, due to dense self-deployment of small cells in a limited area, serious interference from nearby base stations may occur, which results in severe performance degradation. To mitigate downlink interference and utilize spectrum resources more efficiently, we present a novel graph-based resource allocation and interference management approach in this paper. First, we divide small cells into cell clusters, considering their neighborhood relationships in the scenario. Then, we develop another graph clustering scheme to group user equipment (UE) in each cell cluster into UE clusters with minimum intracluster interference. Finally, we utilize a proportional fairness scheduling scheme to assign subchannels to each UE cluster and allocate power using water-filling method. To show the efficacy and effectiveness of our proposed approach, we propose a dual-based approach to search for optimal solutions as the baseline for comparisons. Furthermore, we compare the graph-based approach with the state of the art and a distributed approach without interference coordination. The simulation results show that our graph-based approach reaches more than 90% of the optimal performance and achieves a significant improvement in spectral efficiency compared with the state of the art and the distributed approach both under cochannel and orthogonal deployments. Moreover, the proposed graph-based approach has low computation complexity, making it feasible for real-time implementation.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "ee833203c939cfa9c5ab4135a75e1559",
"text": "The multiconstraint 0-1 knapsack problem is encountered when one has to decide how to use a knapsack with multiple resource constraints. Even though the single constraint version of this problem has received a lot of attention, the multiconstraint knapsack problem has been seldom addressed. This paper deals with developing an effective solution procedure for the multiconstraint knapsack problem. Various relaxations of the problem are suggested and theoretical relations between these relaxations are pointed out. Detailed computational experiments are carried out to compare bounds produced by these relaxations. New algorithms for obtaining surrogate bounds are developed and tested. Rules for reducing problem size are suggested and shown to be effective through computational tests. Different separation, branching and bounding rules are compared using an experimental branch and bound code. An efficient branch and bound procedure is developed, tested and compared with two previously developed optimal algorithms. Solution times with the new procedure are found to be considerably lower. This procedure can also be used as a heuristic for large problems by early termination of the search tree. This scheme was tested and found to be very effective.",
"title": ""
},
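A toy version of the branch-and-bound idea for the multiconstraint 0-1 knapsack is easy to write down. The sketch below uses a deliberately crude optimistic bound (current value plus the sum of all remaining values); the surrogate and LP relaxations studied in the paper are far tighter, so treat this only as an illustration of the search skeleton, with made-up item data.

```python
from itertools import accumulate


def mkp_branch_and_bound(values, weights, capacities):
    """Depth-first branch and bound for the multiconstraint 0-1 knapsack."""
    n, m = len(values), len(capacities)
    # remaining_value[i] = sum of values[i:], used as an optimistic bound
    remaining_value = list(accumulate(values[::-1]))[::-1] + [0]
    best = {"value": 0, "choice": [0] * n}

    def dfs(i, value, used, choice):
        if value > best["value"]:
            best["value"], best["choice"] = value, choice[:]
        if i == n or value + remaining_value[i] <= best["value"]:
            return  # leaf reached, or the bound says this branch cannot win
        # branch 1: take item i if it fits every resource constraint
        if all(used[k] + weights[i][k] <= capacities[k] for k in range(m)):
            choice[i] = 1
            dfs(i + 1, value + values[i],
                [used[k] + weights[i][k] for k in range(m)], choice)
            choice[i] = 0
        # branch 2: skip item i
        dfs(i + 1, value, used, choice)

    dfs(0, 0, [0] * m, [0] * n)
    return best


print(mkp_branch_and_bound(
    values=[10, 13, 7, 8],
    weights=[[3, 5], [4, 4], [2, 3], [3, 2]],   # one row per item, one column per resource
    capacities=[7, 8]))
```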
{
"docid": "f2205324dbf3a828e695854402ebbafe",
"text": "Current research in law and neuroscience is promising to answer these questions with a \"yes.\" Some legal scholars working in this area claim that we are close to realizing the \"early criminologists' dream of identifying the biological roots of criminality.\" These hopes for a neuroscientific transformation of the criminal law, although based in the newest research, are part of a very old story. Criminal law and neuroscience have been engaged in an ill-fated and sometimes tragic affair for over two hundred years. Three issues have recurred that track those that bedeviled earlier efforts to ground criminal law in brain sciences. First is the claim that the brain is often the most relevant or fundamental level at which to understand criminal conduct. Second is that the various phenomena we call \"criminal violence\" arise causally from dysfunction within specific locations in the brain (\"localization\"). Third is the related claim that, because much violent criminality arises from brain dysfunction, people who commit such acts are biologically different from typical people (\"alterity\" or \"otherizing\").",
"title": ""
},
{
"docid": "acab6a0a8b5e268cd0a5416bd00b4f55",
"text": "We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra [27]); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.",
"title": ""
},
{
"docid": "086269223c00209787310ee9f0bcf875",
"text": "The availability of large annotated datasets and affordable computation power have led to impressive improvements in the performance of CNNs on various object detection and recognition benchmarks. These, along with a better understanding of deep learning methods, have also led to improved capabilities of machine understanding of faces. CNNs are able to detect faces, locate facial landmarks, estimate pose, and recognize faces in unconstrained images and videos. In this paper, we describe the details of a deep learning pipeline for unconstrained face identification and verification which achieves state-of-the-art performance on several benchmark datasets. We propose a novel face detector, Deep Pyramid Single Shot Face Detector (DPSSD), which is fast and capable of detecting faces with large scale variations (especially tiny faces). We give design details of the various modules involved in automatic face recognition: face detection, landmark localization and alignment, and face identification/verification. We provide evaluation results of the proposed face detector on challenging unconstrained face detection datasets. Then, we present experimental results for IARPA Janus Benchmarks A, B and C (IJB-A, IJB-B, IJB-C), and the Janus Challenge Set 5 (CS5).",
"title": ""
},
{
"docid": "cdee51ab9562e56aee3fff58cd2143ba",
"text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.",
"title": ""
},
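For intuition, a diagonal special case is sketched below: the preconditioner is estimated online from a running second moment of the gradient (RMSProp-style) and applied to each noisy SGD step. This is only a generic diagonal preconditioner used to illustrate the effect on a badly scaled problem; it does not implement the paper's perturbation-matching criterion or its full, non-diagonal preconditioners, and the test function and constants are assumptions.

```python
import numpy as np


def preconditioned_sgd(grad, x0, steps=500, lr=0.1, beta=0.99, eps=1e-8):
    """SGD with an adaptively estimated diagonal preconditioner
    (running second moment of the gradient)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                       # running estimate of E[g^2]
    for _ in range(steps):
        g = grad(x)
        v = beta * v + (1.0 - beta) * g * g
        x -= lr * g / (np.sqrt(v) + eps)       # preconditioned step
    return x


# badly scaled quadratic with noisy gradients
def noisy_grad(x, curv=np.array([100.0, 1.0])):
    return curv * x + 0.1 * np.random.randn(2)


# both coordinates shrink toward 0 despite the 100:1 curvature gap
print(preconditioned_sgd(noisy_grad, x0=[5.0, 5.0]))
```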
{
"docid": "74593565b633d29041637d877428a0a4",
"text": "The kinematics of contact describe the motion of a point of contact over the surfaces of two contacting objects in response to a relative motion of these objects. Using concepts from differential geometry, I derive a set of equations, called the contact equations, that embody this relationship. I employ the contact equations to design the following applications to be executed by an end-effector with tactile sensing capability: (1) determining the curvature form of an unknown object at a point of contact; and (2) following the surface of an unknown object. The contact equations also serve as a basis for an investigation of the kinematics of grasp. I derive the relationship between the relative motion of two fingers grasping an object and the motion of the points of contact over the object surface. Based on this analysis, we explore the following applications: (1) rolling a sphere between two arbitrarily shaped fingers ; (2) fine grip adjustment (i.e., having two fingers that grasp an unknown object locally optimize their grip for maximum stability ).",
"title": ""
},
{
"docid": "915ad4f43eef7db8fb24080f8389b424",
"text": "This paper details the design and architecture of a series elastic actuated snake robot, the SEA Snake. The robot consists of a series chain of 1-DOF modules that are capable of torque, velocity and position control. Additionally, each module includes a high-speed Ethernet communications bus, internal IMU, modular electro-mechanical interface, and ARM based on-board control electronics.",
"title": ""
},
{
"docid": "66f6668f2c96b602a1f3be67e1f79e87",
"text": "Web advertising is the primary driving force behind many Web activities, including Internet search as well as publishing of online content by third-party providers. Even though the notion of online advertising barely existed a decade ago, the topic is so complex that it attracts attention of a variety of established scientific disciplines, including computational linguistics, computer science, economics, psychology, and sociology, to name but a few. Consequently, a new discipline — Computational Advertising — has emerged, which studies the process of advertising on the Internet from a variety of angles. A successful advertising campaign should be relevant to the immediate user’s information need as well as more generally to user’s background and personalized interest profile, be economically worthwhile to the advertiser and the intermediaries (e.g., the search engine), as well as be aesthetically pleasant and not detrimental to user experience.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "58359b7b3198504fa2475cc0f20ccc2d",
"text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
}
] |
scidocsrr
|
f07651c61e702fa46a645f7517009d3f
|
Is There a Cost to Privacy Breaches? An Event Study
|
[
{
"docid": "b62da3e709d2bd2c7605f3d0463eff2f",
"text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.",
"title": ""
}
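For readers unfamiliar with the event-study machinery, the sketch below runs the classic market-model version on synthetic daily returns: estimate alpha and beta on a pre-event window, then cumulate abnormal returns around the announcement. The window lengths, the simulated breach-day drop and the function name are all illustrative assumptions; the paper additionally computes significance statistics and splits the sample by breach type.

```python
import numpy as np


def market_model_event_study(stock, market, event_idx, est_win=120, evt_win=3):
    """Cumulative abnormal return (CAR) around an event under the market model."""
    est = slice(event_idx - est_win - evt_win, event_idx - evt_win)
    beta, alpha = np.polyfit(market[est], stock[est], 1)   # fit R = alpha + beta*Rm
    evt = slice(event_idx - evt_win, event_idx + evt_win + 1)
    abnormal = stock[evt] - (alpha + beta * market[evt])
    return abnormal.sum()


rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 300)
stk = 0.0002 + 1.2 * mkt + rng.normal(0, 0.01, 300)
stk[200] -= 0.05                   # hypothetical breach announcement drop on day 200
print(market_model_event_study(stk, mkt, event_idx=200))   # strongly negative CAR
```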
] |
[
{
"docid": "17c987c76e3b77bd96e7b20eea0b7ed8",
"text": "Due to the complexity of built environment, urban design patterns considerably affect the microclimate and outdoor thermal comfort in a given urban morphology. Variables such as building heights and orientations, spaces between buildings, plot coverage alter solar access, wind speed and direction at street level. To improve microclimate and comfort conditions urban design elements including vegetation and shading devices can be used. In warm-humid Dar es Salaam, the climate consideration in urban design has received little attention although the urban planning authorities try to develop the quality of planning and design. The main aim of this study is to investigate the relationship between urban design, urban microclimate, and outdoor comfort in four built-up areas with different morphologies including low-, medium-, and high-rise buildings. The study mainly concentrates on the warm season but a comparison with the thermal comfort conditions in the cool season is made for one of the areas. Air temperature, wind speed, mean radiant temperature (MRT), and the physiologically equivalent temperature (PET) are simulated using ENVI-met to highlight the strengths and weaknesses of the existing urban design. An analysis of the distribution of MRT in the areas showed that the area with low-rise buildings had the highest frequency of high MRTs and the lowest frequency of low MRTs. The study illustrates that areas with low-rise buildings lead to more stressful urban spaces than areas with high-rise buildings. It is also shown that the use of dense trees helps to enhance the thermal comfort conditions, i.e., reduce heat stress. However, vegetation might negatively affect the wind ventilation. Nevertheless, a sensitivity analysis shows that the provision of shade is a more efficient way to reduce PET than increases in wind speed, given the prevailing sun and wind conditions in Dar es Salaam. To mitigate heat stress in Dar es Salaam, a set of recommendations and guidelines on how to develop the existing situation from microclimate and thermal comfort perspectives is outlined. Such recommendations will help architects and urban designers to increase the quality of the outdoor environment and demonstrate the need to create better urban spaces in harmony with microclimate and thermal comfort.",
"title": ""
},
{
"docid": "da414d5fce36272332a1a558e35e4b9a",
"text": "IoT service in home domain needs common and effective ways to manage various appliances and devices. So, the home environment needs a gateway that provides dynamical device registration and discovery. In this paper, we propose the IoT Home Gateway that supports abstracted device data to remove heterogeneity, device discovery by DPWS, Auto-configuration for constrained devices such as Arduino. Also, the IoT Home Gateway provides lightweight information delivery using MQTT protocol. In addition, we show implementation results that access and control the device according to the home energy saving scenario.",
"title": ""
},
{
"docid": "20966efc2278b0a2129b44c774331899",
"text": "In current literature, grief play in Massively Multi-player Online Role-Playing Games (MMORPGs) refers to play styles where a player intentionally disrupts the gaming experience of other players. In our study, we have discovered that player experiences may be disrupted by others unintentionally, and under certain circumstances, some will believe they have been griefed. This paper explores the meaning of grief play, and suggests that some forms of unintentional grief play be called greed play. The paper suggests that greed play be treated as griefing, but a more subtle form. It also investigates the different types of griefing and establishes a taxonomy of terms in grief play.",
"title": ""
},
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "409a45b65fdd9e85ae54265c44863db5",
"text": "Use of leaf meters to provide an instantaneous assessment of leaf chlorophyll has become common, but calibration of meter output into direct units of leaf chlorophyll concentration has been difficult and an understanding of the relationship between these two parameters has remained elusive. We examined the correlation of soybean (Glycine max) and maize (Zea mays L.) leaf chlorophyll concentration, as measured by organic extraction and spectrophotometric analysis, with output (M) of the Minolta SPAD-502 leaf chlorophyll meter. The relationship is non-linear and can be described by the equation chlorophyll (μmol m−2)=10(M0.265), r 2=0.94. Use of such an exponential equation is theoretically justified and forces a more appropriate fit to a limited data set than polynomial equations. The exact relationship will vary from meter to meter, but will be similar and can be readily determined by empirical methods. The ability to rapidly determine leaf chlorophyll concentrations by use of the calibration method reported herein should be useful in studies on photosynthesis and crop physiology.",
"title": ""
},
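The calibration quoted above is simple enough to apply directly. The helper below encodes chlorophyll = 10 ** (M ** 0.265) with the caveat the authors themselves give: the exact fit is meter-specific and should be re-estimated for each instrument from paired extractions, so the numbers printed here are only indicative.

```python
def spad_to_chlorophyll(m):
    """Convert a Minolta SPAD-502 reading M into leaf chlorophyll (umol m^-2)
    using the exponential calibration reported above."""
    return 10 ** (m ** 0.265)


for reading in (20, 35, 50):
    print(reading, round(spad_to_chlorophyll(reading), 1))
```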
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "d56e64ac41b4437a4c1409f17a6c7cf2",
"text": "A high step-up forward flyback converter with nondissipative snubber for solar energy application is introduced here. High gain DC/DC converters are the key part of renewable energy systems .The designing of high gain DC/DC converters is imposed by severe demands. It produces high step-up voltage gain by using a forward flyback converter. The energy in the coupled inductor leakage inductance can be recycled via a nondissipative snubber on the primary side. It consists of a combination of forward and flyback converter on the secondary side. It is a hybrid type of forward and flyback converter, sharing the transformer for increasing the utilization factor. By stacking the outputs of them, extremely high voltage gain can be obtained with small volume and high efficiency even with a galvanic isolation. The separated secondary windings in low turn-ratio reduce the voltage stress of the secondary rectifiers, contributing to achievement of high efficiency. Here presents a high step-up topology employing a series connected forward flyback converter, which has a series connected output for high boosting voltage-transfer gain. A MATLAB/Simulink model of the Photo Voltaic (PV) system using Maximum Power Point Tracking (MPPT) has been implimented along with a DC/DC hardware prototype.",
"title": ""
},
{
"docid": "0997c292d6518b17991ce95839d9cc78",
"text": "A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (non-neutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.",
"title": ""
},
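A stripped-down version of the induction step can be sketched as follows: embed the vocabulary, build a k-nearest-neighbour similarity graph, and iteratively propagate +1/-1 seed polarities while clamping the seeds. The toy vectors, the simple averaging update and the parameter choices are assumptions for illustration; the published method uses a random-walk formulation with bootstrapped confidence estimates over real domain-specific embeddings.

```python
import numpy as np


def induce_lexicon(embeddings, pos_seeds, neg_seeds, k=2, iters=50):
    """Propagate seed polarities over a k-NN word similarity graph."""
    words = list(embeddings)
    X = np.array([embeddings[w] for w in words], dtype=float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T                               # cosine similarities
    np.fill_diagonal(sim, 0.0)
    W = np.zeros_like(sim)
    for i in range(len(words)):                 # keep only the k strongest neighbours
        for j in np.argsort(sim[i])[-k:]:
            W[i, j] = W[j, i] = max(sim[i, j], 0.0)
    seed = np.array([1.0 if w in pos_seeds else -1.0 if w in neg_seeds else 0.0
                     for w in words])
    scores = seed.copy()
    for _ in range(iters):                      # propagate and re-clamp the seeds
        scores = W @ scores / (W.sum(axis=1) + 1e-9)
        scores[seed != 0] = seed[seed != 0]
    return dict(zip(words, scores.round(2)))


toy_vectors = {"great": [1, 0.1], "excellent": [0.9, 0.2], "awful": [-1, 0.1],
               "terrible": [-0.9, 0.2], "decent": [0.4, 0.3], "poor": [-0.5, 0.2]}
print(induce_lexicon(toy_vectors, pos_seeds={"great"}, neg_seeds={"awful"}))
```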
{
"docid": "0f72c9034647612097c2096d1f31c980",
"text": "We tackle a fundamental problem to detect and estimate just noticeable blur (JNB) caused by defocus that spans a small number of pixels in images. This type of blur is common during photo taking. Although it is not strong, the slight edge blurriness contains informative clues related to depth. We found existing blur descriptors based on local information cannot distinguish this type of small blur reliably from unblurred structures. We propose a simple yet effective blur feature via sparse representation and image decomposition. It directly establishes correspondence between sparse edge representation and blur strength estimation. Extensive experiments manifest the generality and robustness of this feature.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
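The core of the technique is compact: estimate transition frequencies between adjacent level columns from human-authored maps, then sample new columns from those conditional distributions. The sketch below works on single characters standing in for vertical tile slices and uses made-up example strings; the paper's configurations over real Super Mario Bros. levels, higher-order chains and fallback strategies are not reproduced.

```python
import random
from collections import defaultdict


def learn_transitions(levels):
    """Count first-order transitions between adjacent level columns."""
    counts = defaultdict(lambda: defaultdict(int))
    for level in levels:
        for a, b in zip(level, level[1:]):
            counts[a][b] += 1
    return counts


def sample_level(counts, start, length=20, rng=random.Random(7)):
    """Sample a new level by drawing each column from the learned distribution."""
    level = [start]
    for _ in range(length - 1):
        nxt = counts[level[-1]]
        if not nxt:                       # dead end: restart from the seed column
            level.append(start)
            continue
        cols, weights = zip(*nxt.items())
        level.append(rng.choices(cols, weights=weights)[0])
    return "".join(level)


# '-' flat ground, '^' gap, 'B' block row, 'P' pipe column (toy alphabet)
examples = ["----B--^^--P----B---", "---^^---B--P---^^---"]
model = learn_transitions(examples)
print(sample_level(model, start="-"))
```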
{
"docid": "af2dbbec77616bed893d964e6a822db0",
"text": "Most existing APL implementations are interpretive in nature,that is, each time an APL statement is encountered it is executedby a body of code that is perfectly general, i.e. capable ofevaluating any APL expression, and is in no way tailored to thestatement on hand. This costly generality is said to be justifiedbecause APL variables are typeless and thus can vary arbitrarily intype, shape, and size during the execution of a program. What thisargument overlooks is that the operational semantics of an APLstatement are not modified by the varying storage requirements ofits variables.\nThe first proposal for a non fully interpretive implementationwas the thesis of P. Abrams [1], in which a high level interpretercan defer performing certain operations by compiling code which alow level interpreter must later be called upon to execute. Thebenefit thus gained is that intelligence gathered from a widercontext can be brought to bear on the evaluation of asubexpression. Thus on evaluating (A+B)[I],only the addition A[I]+B[I] will beperformed. More recently, A. Perlis and several of his students atYale [9,10] have presented a scheme by which a full-fledged APLcompiler can be written. The compiled code generated can then bevery efficiently executed on a specialized hardware processor. Asimilar scheme is used in the newly released HP/3000 APL [12].\nThis paper builds on and extends the above ideas in severaldirections. We start by studying in some depth the two key notionsall this work has in common, namely compilation anddelayed evaluation in the context of APL. By delayedevaluation we mean the strategy of deferring the computation ofintermediate results until the moment they are needed. Thus largeintermediate expressions are not built in storage; instead theirelements are \"streamed\" in time. Delayed evaluation for APL wasprobably first proposed by Barton (see [8]).\nMany APL operators do not correspond to any real dataoperations. Instead their effect is to rename the elements of thearray they act upon. A wide class of such operators, which we willcall the grid selectors, can be handled by essentiallypushing them down the expression tree and incorporating theireffect into the leaf accessors. Semantically this is equivalent tothe drag-along transformations described by Abrams.Performing this optimization will be shown to be an integral partof delayed evaluation.\nIn order to focus our attention on the above issues, we make anumber of simplifying assumptions. We confine our attention to codecompilation for single APL expressions, such as might occur in an\"APL Calculator\", where user defined functions are not allowed. Ofcourse we will be critically concerned with the re-usability of thecompiled code for future evaluations. We also ignore thedistinctions among the various APL primitive types and assume thatall our arrays are of one uniform numeric type. We have studied thesituation without these simplifying assumptions, but plan to reporton this elsewhere.\nThe following is a list of the main contributions of thispaper.\n\" We present an algorithm for incorporating the selectoroperators into the accessors for the leaves of the expression tree.The algorithm runs in time proportional to the size of the tree, asopposed to its path length (which is the case for the algorithms of[10] and [12]).\nAlthough arbitrary reshapes cannot be handled by the abovealgorithm, an especially important case can: that of aconforming reshape. 
The reshape AñB iscalled conforming if ñB is a suffix of A.\n\" By using conforming reshapes we can eliminate inner and outerproducts from the expression tree and replace them with scalaroperators and reductions along the last dimension. We do this byintroducing appropriate selectors on the product arguments, theneventually absorbing these selectors into the leaf accessors. Thesame mechanism handles scalar extension, the convention ofmaking scalar operands of scalar operators conform to arbitraryarrays.\n\" Once products, scalar extensions, and selectors have beeneliminated, what is left is an expression tree consisting entirelyof scalar operators and reductions along the last dimension. As aconsequence, during execution, the dimension currently being workedon obeys a strict stack-like discipline. This implies that we cangenerate extremely efficient code that is independent of theranks of the arguments.\nSeveral APL operators use the elements of their operands severaltimes. A pure delayed evaluation strategy would require multiplereevaluations.\n\" We introduce a general buffering mechanism, calledslicing, which allows portions of a subexpression that willbe repeatedly needed to be saved, to avoid future recomputation.Slicing is well integrated with the evaluation on demand mechanism.For example, when operators that break the streaming areencountered, slicing is used to determine the minimum size bufferrequired between the order in which a subexpression can deliver itsresult, and the order in which the full expression needs it.\n\" The compiled code is very efficient. A minimal number of loopvariables is maintained and accessors are shared among as manyexpression atoms as possible. Finally, the code generated is wellsuited for execution by an ordinary minicomputer, such as a PDP-11,or a Data General Nova. We have implemented this compiler on theAlto computer at Xerox PARC.\nThe plan of the paper is this: We start with a generaldiscussion of compilation and delayed evaluation. Then we motivatethe structures and algorithms we need to introduce by showing howto handle a wider and wider class of the primitive APL operators.We discuss various ways of tailoring an evaluator for a particularexpression. Some of this tailoring is possible based only on theexpression itself, while other optimizations require knowledge ofthe (sizes of) the atom bindings in the expression. The readershould always be alert to the kind of knowledge being used, forthis affects the validity of the compiled code across reexecutionsof a statement.",
"title": ""
},
{
"docid": "6d285e0e8450791f03f95f58792c8f3c",
"text": "Basic psychology research suggests the possibility that confessions-a potent form of incrimination-may taint other evidence, thereby creating an appearance of corroboration. To determine if this laboratory-based phenomenon is supported in the high-stakes world of actual cases, we conducted an archival analysis of DNA exoneration cases from the Innocence Project case files. Results were consistent with the corruption hypothesis: Multiple evidence errors were significantly more likely to exist in false-confession cases than in eyewitness cases; in order of frequency, false confessions were accompanied by invalid or improper forensic science, eyewitness identifications, and snitches and informants; and in cases containing multiple errors, confessions were most likely to have been obtained first. We believe that these findings underestimate the problem and have important implications for the law concerning pretrial corroboration requirements and the principle of \"harmless error\" on appeal.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "d48bb823b5d4c6105b95f54f65ba3634",
"text": "When the terms “intelligence” or “intelligent” are used by scientists, they are referring to a large collection of human cognitive behaviors— people thinking. When life scientists speak of the intelligence of animals, they are asking us to call to mind a set of human behaviors that they are asserting the animals are (or are not) capable of. When computer scientists speak of artificial intelligence, machine intelligence, intelligent agents, or (as I chose to do in the title of this essay) computational intelligence, we are also referring to that set of human behaviors. Although intelligence meanspeople thinking, we might be able to replicate the same set of behaviors using computation. Indeed, one branch of modern cognitive psychology is based on the model that the human mind and brain are complex computational “engines,” that is, we ourselves are examples of computational intelligence.",
"title": ""
},
{
"docid": "d44dfc7e6ff28390f2dd9445641d664e",
"text": "A formal framework is presented for the characterization of cache allocation models in Information-Centric Networks (ICN). The framework is used to compare the performance of optimal caching everywhere in an ICN with opportunistic caching of content only near its consumers. This comparison is made using the independent reference model adopted in all prior studies, as well as a new model that captures non-stationary reference locality in space and time. The results obtained analytically and from simulations show that optimal caching throughout an ICN and opportunistic caching at the edge routers of an ICN perform comparably the same. In addition caching content opportunistically only near its consumers is shown to outperform the traditional on-path caching approach assumed in most ICN architectures in an unstructured network with arbitrary topology represented as a random geometric graph.",
"title": ""
},
{
"docid": "2c8061cf1c9b6e157bdebf9126b2f15c",
"text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.",
"title": ""
},
{
"docid": "8c80129507b138d1254e39acfa9300fc",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\[email protected].",
"title": ""
},
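As a rough picture of the model family, here is a minimal PyTorch tagger: word embeddings feed a bidirectional LSTM and a per-token linear layer produces tag scores. The CRF output layer that gives LSTM-CRF its name is deliberately omitted (independent per-token predictions instead), and the vocabulary size, tag set and dimensions are placeholder assumptions, so this is a sketch of the architecture rather than the published tool.

```python
import torch
import torch.nn as nn


class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM sequence tagger with a softmax-style output layer."""
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                    # (batch, seq_len, n_tags) tag scores


tagger = BiLSTMTagger(vocab_size=5000, n_tags=5)      # e.g. BIO tags for one entity class
scores = tagger(torch.randint(0, 5000, (2, 12)))       # two 12-token sentences
print(scores.argmax(-1))                                # predicted tag ids per token
```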
{
"docid": "92f1979e78058acab3a634efa7ca9cf1",
"text": "This paper is an overview of current gyroscopes and their roles based on their applications. The considered gyroscopes include mechanical gyroscopes and optical gyroscopes at macro- and micro-scale. Particularly, gyroscope technologies commercially available, such as Mechanical Gyroscopes, silicon MEMS Gyroscopes, Ring Laser Gyroscopes (RLGs) and Fiber-Optic Gyroscopes (FOGs), are discussed. The main features of these gyroscopes and their technologies are linked to their performance.",
"title": ""
},
{
"docid": "718e61017414bb08616dd274cd1cdf02",
"text": "This paper focuses on the parameter estimation of transmission lines which is an essential prerequisite for system studies and relay settings. Synchrophasor measurements have shown promising potentials for the transmission line parameter estimation. Majority of existing techniques entail existence of phasor measurement units (PMUs) at both ends of a given transmission line; however, this assumption rarely holds true in nowadays power networks with few installed PMUs. In this paper, a practical technique is proposed for the estimation of transmission line parameters while the required data are phasor measurements at one end of a given line and conventional magnitude measurements at the other end. The proposed method is thus on the basis of joint PMU and supervisory control and data acquisition (SCADA) measurements. A non-linear weighted least-square error (NWLSE) algorithm is employed for the maximum-likelihood estimation of parameters. The approach is initially devised for simple transmission lines with two terminals; then, it is extended for three-terminal lines. Numerical studies encompassing two- and three-terminal lines are conducted through software and hardware simulations. The results demonstrate the effectiveness of new technique and verify its applicability in present power networks.",
"title": ""
},
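To show the flavour of the estimation problem, the sketch below fits only the series impedance of a short line from synchronized phasors at both ends, using the model Vs = Vr + Z * Is and a complex least-squares solve on synthetic data. It ignores shunt admittance, the three-terminal case and the mixed PMU/SCADA weighting that the paper's non-linear weighted least-squares estimator handles, and every variable name here is an illustrative assumption.

```python
import numpy as np


def estimate_short_line(Vs, Is, Vr):
    """Least-squares estimate of a short line's series impedance Z = R + jX
    from sending-end phasors (Vs, Is) and receiving-end voltage Vr."""
    A = np.asarray(Is, dtype=complex).reshape(-1, 1)
    b = np.asarray(Vs, dtype=complex) - np.asarray(Vr, dtype=complex)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z[0]


# synthetic noisy measurements around a true impedance of 2 + 8j ohms
rng = np.random.default_rng(1)
Z_true = 2 + 8j
I = rng.uniform(100, 400, 50) * np.exp(1j * rng.uniform(-0.3, 0.3, 50))
Vr = 63500 * np.exp(1j * rng.uniform(-0.05, 0.05, 50))
Vs = Vr + Z_true * I + rng.normal(0, 20, 50)
print(estimate_short_line(Vs, I, Vr))        # close to (2+8j)
```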
{
"docid": "4f2a8e505a70c4204a2f36c4d8989713",
"text": "In our previous research, we examined whether minimally trained crowd workers could find, categorize, and assess sidewalk accessibility problems using Google Street View (GSV) images. This poster paper presents a first step towards combining automated methods (e.g., machine visionbased curb ramp detectors) in concert with human computation to improve the overall scalability of our approach.",
"title": ""
}
] |
scidocsrr
|
fb09a2ee30dab464632f395e45a61300
|
Anticipation and next action forecasting in video: an end-to-end model with memory
|
[
{
"docid": "6a72b09ce61635254acb0affb1d5496e",
"text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
"title": ""
}
] |
[
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "bb2e7ee3a447fd5bad57f2acd0f6a259",
"text": "A new cavity arrangement, namely, the generalized TM dual-mode cavity, is presented in this paper. In contrast with the previous contributions on TM dual-mode filters, the generalized TM dual-mode cavity allows the realization of both symmetric and asymmetric filtering functions, simultaneously exploiting the maximum number of finite frequency transmission zeros. The high design flexibility in terms of number and position of transmission zeros is obtained by exciting and exploiting a set of nonresonating modes. Five structure parameters are used to fully control its equivalent transversal topology. The relationship between structure parameters and filtering function realized is extensively discussed. The design of multiple cavity filters is presented along with the experimental results of a sixth-order filter having six asymmetrically located transmission zeros.",
"title": ""
},
{
"docid": "e8a69f68bc1647c69431ce88a0728777",
"text": "Contrary to popular perception, qualitative research can produce vast amounts of data. These may include verbatim notes or transcribed recordings of interviews or focus groups, jotted notes and more detailed “fieldnotes” of observational research, a diary or chronological account, and the researcher’s reflective notes made during the research. These data are not necessarily small scale: transcribing a typical single interview takes several hours and can generate 20-40 pages of single spaced text. Transcripts and notes are the raw data of the research. They provide a descriptive record of the research, but they cannot provide explanations. The researcher has to make sense of the data by sifting and interpreting them.",
"title": ""
},
{
"docid": "1f0fd314cdc4afe7b7716ca4bd681c16",
"text": "Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state of the art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
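To make the i.i.d. Rayleigh baseline mentioned in the massive MIMO passage concrete, the sketch below draws an M x K channel with CN(0,1) entries and evaluates the classical log-det sum capacity. The antenna count, user count, and SNR are illustrative assumptions; the measured-channel evaluation in the paper is not reproduced here.

```python
# Minimal sketch: uplink sum capacity of an M-antenna base station serving
# K single-antenna users over an i.i.d. Rayleigh channel, the theoretical
# baseline the measured ULA/UCA channels are compared against.
# C = log2 det(I_K + (rho/K) * H^H H), with H of size M x K.
import numpy as np

rng = np.random.default_rng(1)
M, K, rho = 128, 10, 10.0          # base-station antennas, users, SNR (linear)

def iid_rayleigh(M, K, rng):
    # CN(0, 1) entries: independent real/imaginary parts, each with variance 1/2.
    return (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

H = iid_rayleigh(M, K, rng)
gram = H.conj().T @ H                                  # K x K Gram matrix
_, logdet = np.linalg.slogdet(np.eye(K) + (rho / K) * gram)
capacity = logdet / np.log(2)                          # bits/s/Hz
print("i.i.d. Rayleigh sum capacity:", capacity)
```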
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "423d15bbe1c47bc6225030307fc8e379",
"text": "In a secret sharing scheme, a datumd is broken into shadows which are shared by a set of trustees. The family {P′⊆P:P′ can reconstructd} is called the access structure of the scheme. A (k, n)-threshold scheme is a secret sharing scheme having the access structure {P′⊆P: |P′|≥k}. In this paper, by observing a simple set-theoretic property of an access structure, we propose its mathematical definition. Then we verify the definition by proving that every family satisfying the definition is realized by assigning two more shadows of a threshold scheme to trustees.",
"title": ""
},
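The secret sharing passage above defines a (k, n)-threshold access structure abstractly. A standard concrete realization (not specific to that paper) is Shamir's polynomial scheme; the minimal sketch below shares a secret among n trustees so that any k of them can reconstruct it. The tiny prime field is an assumption for illustration only.

```python
# Minimal sketch of a (k, n)-threshold scheme in the sense used above,
# realized with Shamir's polynomial construction over a prime field.
# The tiny prime is for illustration only; real deployments use large fields.
import random

P = 2087  # small prime field modulus (assumption for the demo)

def share(secret, k, n):
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 1234
```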
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "9c780c4d37326ce2a5e2838481f48456",
"text": "A maximum power point tracker has been previously developed for the single high performance triple junction solar cell for hybrid and electric vehicle applications. The maximum power point tracking (MPPT) control method is based on the incremental conductance (IncCond) but removes the need for current sensors. This paper presents the hardware implementation of the maximum power point tracker. Significant efforts have been made to reduce the size to 18 mm times 21 mm (0.71 in times 0.83 in) and the cost to close to $5 US. This allows the MPPT hardware to be integrable with a single solar cell. Precision calorimetry measurements are employed to establish the converter power loss and confirm that an efficiency of 96.2% has been achieved for the 650-mW converter with 20-kHz switching frequency. Finally, both the static and the dynamic tests are conducted to evaluate the tracking performances of the MPPT hardware. The experimental results verify a tracking efficiency higher than 95% under three different insolation levels and a power loss less than 5% of the available cell power under instantaneous step changes between three insolation levels.",
"title": ""
},
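The tracker described above is based on the incremental conductance (IncCond) method. The sketch below shows the textbook IncCond decision rule, which drives the operating voltage toward the point where dI/dV = -I/V; the paper's current-sensorless refinement is not reproduced, and the step size, thresholds, and sample values are illustrative assumptions.

```python
# Minimal sketch of the incremental-conductance (IncCond) decision rule that
# underlies the tracker described above. At the maximum power point
# dP/dV = 0, which is equivalent to dI/dV = -I/V; the controller nudges the
# operating voltage reference toward that condition.
def inccond_step(v, i, v_prev, i_prev, v_ref, step=0.05, eps=1e-3):
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:                      # voltage essentially unchanged
        if abs(di) >= eps:                 # irradiance changed: follow current
            v_ref += step if di > 0 else -step
    else:
        inc_cond = di / dv                 # incremental conductance dI/dV
        if abs(inc_cond + i / v) >= eps:   # not at the MPP yet
            # Left of the MPP (dP/dV > 0): raise voltage; right of it: lower.
            v_ref += step if inc_cond > -i / v else -step
    return v_ref

# Example update with made-up sensor samples (volts, amps):
print(inccond_step(v=17.2, i=3.1, v_prev=17.0, i_prev=3.15, v_ref=17.2))
```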
{
"docid": "6abc9ea6e1d5183e589194db8520172c",
"text": "Smart decision making at the tactical level is important for Artificial Intelligence (AI) agents to perform well in the domain of real-time strategy (RTS) games. This paper presents a Bayesian model that can be used to predict the outcomes of isolated battles, as well as predict what units are needed to defeat a given army. Model parameters are learned from simulated battles, in order to minimize the dependency on player skill. We apply our model to the game of StarCraft, with the end-goal of using the predictor as a module for making high-level combat decisions, and show that the model is capable of making accurate predictions.",
"title": ""
},
{
"docid": "3255b89b7234595e7078a012d4e62fa7",
"text": "Virtual assistants such as IFTTT and Almond support complex tasks that combine open web APIs for devices and web services. In this work, we explore semantic parsing to understand natural language commands for these tasks and their compositions. We present the ThingTalk dataset, which consists of 22,362 commands, corresponding to 2,681 distinct programs in ThingTalk, a language for compound virtual assistant tasks. To improve compositionality of multiple APIs, we propose SEQ2TT, a Seq2Seq extension using a bottom-up encoding of grammar productions for programs and a maxmargin loss. On the ThingTalk dataset, SEQ2TT obtains 84% accuracy on trained programs and 67% on unseen combinations, an improvement of 12% over a basic sequence-to-sequence model with attention.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "6e198119c72a796bc0b56280503fec18",
"text": "Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs.",
"title": ""
},
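To make the similarity step of the DDI passage concrete, the sketch below encodes each drug as a binary vector over a CTET universe and scores drug pairs with an inner-product similarity. The drug names and CTET entries are made-up toy data, not the paper's dataset.

```python
# Minimal sketch of the similarity step described above: each drug is encoded
# as a binary vector over its carriers, transporters, enzymes and targets
# (CTET), and drug pairs are scored with an inner-product-based similarity.
import numpy as np

ctet_universe = ["CYP3A4", "CYP2D6", "P-gp", "OATP1B1", "ALB", "5-HT2A"]
drugs = {
    "drugA": {"CYP3A4", "P-gp", "ALB"},
    "drugB": {"CYP3A4", "P-gp", "OATP1B1"},
    "drugC": {"5-HT2A", "CYP2D6"},
}

def binary_vector(elements):
    return np.array([1 if e in elements else 0 for e in ctet_universe])

vectors = {name: binary_vector(els) for name, els in drugs.items()}

def inner_product_similarity(a, b):
    # Raw inner product; a normalized variant (cosine) is also common.
    return int(vectors[a] @ vectors[b])

for pair in [("drugA", "drugB"), ("drugA", "drugC")]:
    print(pair, inner_product_similarity(*pair))
```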
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "6de3aca18d6c68f0250c8090ee042a4e",
"text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.",
"title": ""
},
{
"docid": "a3b3380940613a5fb704727e41e9907a",
"text": "Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems - with a significant research emphasis on modeling attackers' behaviors to handle their bounded rationality. However, access to real-world data (used for learning an accurate behavioral model) is often limited, leading to uncertainty in attacker's behaviors while modeling. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty, and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender's optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.",
"title": ""
},
{
"docid": "5998ce035f4027c6713f20f8125ec483",
"text": "As the use of automotive radar increases, performance limitations associated with radar-to-radar interference will become more significant. In this paper, we employ tools from stochastic geometry to characterize the statistics of radar interference. Specifically, using two different models for the spatial distributions of vehicles, namely, a Poisson point process and a Bernoulli lattice process, we calculate for each case the interference statistics and obtain analytical expressions for the probability of successful range estimation. This paper shows that the regularity of the geometrical model appears to have limited effect on the interference statistics, and so it is possible to obtain tractable tight bounds for the worst case performance. A technique is proposed for designing the duty cycle for the random spectrum access, which optimizes the total performance. This analytical framework is verified using Monte Carlo simulations.",
"title": ""
},
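A minimal Monte Carlo version of the Poisson-point-process interference model discussed above is sketched below: interferers are dropped on a road as a 1-D PPP, each transmits with a duty-cycle probability, and the aggregate interference at a victim radar is accumulated under a power-law path loss. Densities, powers, the exclusion radius, and the path-loss exponent are illustrative assumptions.

```python
# Minimal Monte Carlo sketch of the Poisson-point-process interference model.
import numpy as np

rng = np.random.default_rng(2)
lam = 0.02        # vehicles per metre
road = 2000.0     # metres considered on each side of the victim radar
duty = 0.3        # probability an interferer transmits in the victim's window
p_tx = 1.0        # transmit power (normalized)
alpha = 2.0       # path-loss exponent
d_min = 5.0       # exclusion distance around the victim

def aggregate_interference():
    n = rng.poisson(lam * 2 * road)                     # PPP point count
    x = rng.uniform(-road, road, n)                     # PPP point locations
    x = x[np.abs(x) > d_min]
    active = rng.random(x.size) < duty                  # random spectrum access
    return np.sum(p_tx * np.abs(x[active]) ** (-alpha))

samples = np.array([aggregate_interference() for _ in range(10000)])
print("mean interference:", samples.mean(), " 95th pct:", np.percentile(samples, 95))
```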
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
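The sketch below illustrates the units discussed in the maxout/probout passage: a maxout unit pools over k linear transformations of its input, and a probabilistic variant replaces the hard max with sampling over the pieces. Shapes, the sampling rule, and the temperature are simplified assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a maxout unit and a simplified probabilistic variant.
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, k = 8, 4, 3
W = rng.standard_normal((k, d_out, d_in)) * 0.1
b = np.zeros((k, d_out))

def maxout(x):
    z = np.einsum("koi,i->ko", W, x) + b      # k candidate activations per unit
    return z.max(axis=0)                      # deterministic maxout pooling

def probout(x, temperature=1.0):
    z = np.einsum("koi,i->ko", W, x) + b
    p = np.exp(z / temperature)
    p /= p.sum(axis=0, keepdims=True)         # softmax over the k pieces
    idx = np.array([rng.choice(k, p=p[:, j]) for j in range(d_out)])
    return z[idx, np.arange(d_out)]           # sample one piece per unit

x = rng.standard_normal(d_in)
print(maxout(x), probout(x))
```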
] |
scidocsrr
|
cf6f80403f06d4bb848d729b36bc4e19
|
Trajectory Planning Design Equations and Control of a 4 - axes Stationary Robotic Arm
|
[
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
}
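The trajectory passage above notes that the near-optimal control is piecewise constant (bang-bang). The sketch below shows the simplest such profile for a single axis: a rest-to-rest move that accelerates at the limit to the midpoint and decelerates symmetrically. Velocity limits and the coupling between axes handled by the full algorithm are deliberately ignored.

```python
# Minimal sketch of a bang-bang, rest-to-rest trajectory for one axis: apply
# +a_max for half the move, then -a_max. This is the simplest instance of the
# piecewise-constant control profile discussed above.
import math

def bang_bang_profile(distance, a_max):
    t_switch = math.sqrt(distance / a_max)      # accelerate until half-way
    t_total = 2.0 * t_switch

    def position(t):
        if t <= t_switch:
            return 0.5 * a_max * t ** 2
        if t <= t_total:
            tr = t_total - t
            return distance - 0.5 * a_max * tr ** 2
        return distance

    return t_total, position

t_total, pos = bang_bang_profile(distance=2.0, a_max=1.5)
print("move time:", t_total, "midpoint position:", pos(t_total / 2))
```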
] |
[
{
"docid": "261e3c6f2826473d9128d4c763ffaa41",
"text": "Since remote sensing provides more and more sensors and techniques to accumulate data on urban regions, three-dimensional representations of these complex environments gained much interest for various applications. In order to obtain three-dimensional representations, one of the most practical ways is to generate Digital Surface Models (DSMs) using very high resolution remotely sensed images from two or more viewing directions, or by using LIDAR sensors. Due to occlusions, matching errors and interpolation techniques these DSMs do not exhibit completely steep walls, and in order to obtain real three-dimensional urban models including objects like buildings from these DSMs, advanced methods are needed. A novel approach based on building shape detection, height estimation, and rooftop reconstruction is proposed to achieve realistic three-dimensional building representations. Our automatic approach consists of three main modules as; detection of complex building shapes, understanding rooftop type, and three-dimensional building model reconstruction based on detected shape and rooftop type. Besides the development of the methodology, the goal is to investigate the applicability and accuracy which can be accomplished in this context for different stereo sensor data. We use DSMs of Munich city which are obtained from different satellite (Cartosat-1, Ikonos, WorldView-2) and airborne sensors (3K camera, HRSC, and LIDAR). The paper later focuses on a quantitative comparisons of the outputs from the different multi-view sensors for a better understanding of qualities, capabilities and possibilities for applications. Results look very promising even for the DSMs derived from satellite data.",
"title": ""
},
{
"docid": "693c29b040bb37142d95201589b24d0d",
"text": "We are overwhelmed by the response to IJEIS. This response reflects the importance of the subject of enterprise information systems in global market and enterprise environments. We have some exciting special issues forthcoming in 2006. The first two issues will feature: (i) information and knowledge based approaches to improving performance in organizations, and (ii) hard and soft modeling tools and approaches to data and information management in real life projects and systems. IJEIS encourages researchers and practitioners to share their new ideas and results in enterprise information systems design and implementation, and also share relevant technical issues related to the development of such systems. This issue of IJEIS contains five articles dealing with an approach to evaluating ERP software within the acquisition process, uncertainty in ERP-controlled manufacturing systems, a review on IT business value research , methodologies for evaluating investment in electronic data interchange, and an ERP implementation model. An overview of the papers follows. The first paper, A Three-Dimensional Approach in Evaluating ERP Software within the Acquisition Process is authored by Verville, Bernadas and Halingten. This paper is based on an extensive study of the evaluation process of the acquisition of an ERP software of four organizations. Three distinct process types and activities were found: vendor's evaluation, functional evaluation , and technical evaluation. This paper provides a perspective on evaluation and sets it apart as modality for action, whose intent is to investigate and uncover by means of specific defined evaluative activities all issues pertinent to ERP software that an organization can use in its decision to acquire a solution that will meet its needs. The use of ERP is becoming increasingly prevalent in many modern manufacturing enterprises. However, knowledge of their performance when perturbed by several significant uncertainties simultaneously is not as widespread as it should have been. Koh, Gunasekaran, Saad and Arunachalam authored Uncertainty in ERP-Controlled Manufacturing Systems. The paper presents a developmental and experimental work on modeling uncertainty within an ERP multi-product, multi-level dependent demand manufacturing planning and scheduling system in a simulation model developed using ARENA/ SIMAN. To enumerate how uncertainty af",
"title": ""
},
{
"docid": "b1c6d95b297409a7b47d8fa7e6da6831",
"text": "~I \"e have modified the original model of selective attention, which was previmtsly proposed by Fukushima, and e~tended its ability to recognize attd segment connected characters in cmwive handwriting. Although the or~¢inal model q/'sdective attention ah'ead)' /tad the abilio' to recognize and segment patterns, it did not alwa)w work well when too many patterns were presented simuhaneousl): In order to restrict the nttmher q/patterns to be processed simultaneousO; a search controller has been added to the original model. Tlw new mode/mainly processes the patterns contained in a small \"search area, \" which is mo~vd b)' the search controller A ptvliminao' ev~eriment with compltter simttlatiott has shown that this approach is promisittg. The recogttition arid segmentation q[k'haracters can be sttcces~[itl even thottgh each character itt a handwritten word changes its .shape h)\" the e[]'ect o./the charactetw",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "8296954ffde770f611d86773f72fb1b4",
"text": "Group and async. commit? Better I/O performance But contention unchanged It reduces buffer contention, but... Log space partitioning: by page or xct? – Impacts locality, recovery strategy Dependency tracking: before commit, T4 must persist log records written by: – itself – direct xct deps: T4 T2 – direct page deps: T4 T3 – transitive deps: T4 {T3, T2} T1 Storage is slow – T4 flushes all four logs upon commit (instead of one) Log work (20%) Log contention (46%) Other work (21%) CPU cycles: Lock manager Other contention",
"title": ""
},
{
"docid": "91e8516d2e7e1e9de918251ac694ee08",
"text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.",
"title": ""
},
{
"docid": "700191eaaaf0bdd293fc3bbd24467a32",
"text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.",
"title": ""
},
{
"docid": "07c34b068cc1217de2e623122a22d2b0",
"text": "Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we recruited large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments to regulate intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7 × 10(-7), 0.00027, and 8.3 × 10(-8), for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA.",
"title": ""
},
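As a concrete, hedged illustration of the standardized multiple linear regression used in the rheumatoid arthritis study above, the sketch below z-scores the predictors and the outcome and fits ordinary least squares, so the coefficients can be read as standardized betas. All data here are synthetic.

```python
# Minimal sketch of standardized multiple linear regression: z-score the
# predictors (air pressure, humidity, temperature) and the outcome (a
# synovitis score), then fit ordinary least squares.
import numpy as np

rng = np.random.default_rng(4)
n = 500
pressure = 1013 + 8 * rng.standard_normal(n)
humidity = 60 + 15 * rng.standard_normal(n)
temperature = 15 + 8 * rng.standard_normal(n)
# Synthetic synovitis score with a negative dependence on pressure.
score = 10 - 0.08 * pressure + 0.01 * humidity + rng.standard_normal(n)

def zscore(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([zscore(pressure), zscore(humidity), zscore(temperature)])
X = np.column_stack([np.ones(n), X])          # intercept term
y = zscore(score)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("standardized betas (pressure, humidity, temperature):", beta[1:])
```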
{
"docid": "f1e0565fbc19791ed636c146a9c2dfcc",
"text": "It is well established that value stocks outperform glamour stocks, yet considerable debate exists about whether the return differential reflects compensation for risk or mispricing. Under mispricing explanations, prices of glamour (value) firms reflect systematically optimistic (pessimistic) expectations; thus, the value/glamour effect should be concentrated (absent) among firms with (without) ex ante identifiable expectation errors. Classifying firms based upon whether expectations implied by current pricing multiples are congruent with the strength of their fundamentals, we document that value/glamour returns and ex post revisions to market expectations are predictably concentrated (absent) among firms with ex ante biased (unbiased) market expectations.",
"title": ""
},
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "a2013a7c9212829187fff9bfa42665e5",
"text": "As companies increase their efforts in retaining customers, being able to predict accurately ahead of time, whether a customer will churn in the foreseeable future is an extremely powerful tool for any marketing team. The paper describes in depth the application of Deep Learning in the problem of churn prediction. Using abstract feature vectors, that can generated on any subscription based company’s user event logs, the paper proves that through the use of the intrinsic property of Deep Neural Networks (learning secondary features in an unsupervised manner), the complete pipeline can be applied to any subscription based company with extremely good churn predictive performance. Furthermore the research documented in the paper was performed for Framed Data (a company that sells churn prediction as a service for other companies) in conjunction with the Data Science Institute at Lancaster University, UK. This paper is the intellectual property of Framed Data.",
"title": ""
},
{
"docid": "93a2d7072ab88ad77c23f7c1dc5a129c",
"text": "In recent decades, the need for efficient and effective image search from large databases has increased. In this paper, we present a novel shape matching framework based on structures common to similar shapes. After representing shapes as medial axis graphs, in which nodes show skeleton points and edges connect nearby points, we determine the critical nodes connecting or representing a shape’s different parts. By using the shortest path distance from each skeleton (node) to each of the critical nodes, we effectively retrieve shapes similar to a given query through a transportation-based distance function. To improve the effectiveness of the proposed approach, we employ a unified framework that takes advantage of the feature representation of the proposed algorithm and the classification capability of a supervised machine learning algorithm. A set of shape retrieval experiments including a comparison with several well-known approaches demonstrate the proposed algorithm’s efficacy and perturbation experiments show its robustness.",
"title": ""
},
{
"docid": "4e4e65f9ee3555f2b3ee134f3ab5ca7d",
"text": "Conventional wisdom has regarded low self-esteem as an important cause of violence, but the opposite view is theoretically viable. An interdisciplinary review of evidence about aggression, crime, and violence contradicted the view that low self-esteem is an important cause. Instead, violence appears to be most commonly a result of threatened egotism--that is, highly favorable views of self that are disputed by some person or circumstance. Inflated, unstable, or tentative beliefs in the self's superiority may be most prone to encountering threats and hence to causing violence. The mediating process may involve directing anger outward as a way of avoiding a downward revision of the self-concept.",
"title": ""
},
{
"docid": "36bee0642c30a3ecab2c9a8996084b61",
"text": "Many works related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless the connection with inverse problem was considered only for the discrete (finite sample) problem which is solved in practice and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and analyse the interplay between the discrete and continuous problems. From a theoretical point of view, this allows to draw a clear connection between the consistency approach imposed in learning theory, and the stability convergence property used in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.",
"title": ""
},
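The passage above centers on Tikhonov regularization as the bridge between learning and inverse problems. The sketch below shows the regularized least-squares estimator in its closed form, w = (A^T A + lambda I)^{-1} A^T y, on synthetic data; the regularization strengths are illustrative.

```python
# Minimal sketch of Tikhonov-regularized least squares: minimize
# ||A w - y||^2 + lam * ||w||^2, whose closed-form solution is
# w = (A^T A + lam I)^(-1) A^T y.
import numpy as np

rng = np.random.default_rng(5)
n, d = 50, 20
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = A @ w_true + 0.1 * rng.standard_normal(n)

def tikhonov(A, y, lam):
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

for lam in (0.0, 1.0, 100.0):
    w = tikhonov(A, y, lam)
    print(f"lam={lam:>6}: train residual {np.linalg.norm(A @ w - y):.3f}, "
          f"||w|| {np.linalg.norm(w):.3f}")
```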
{
"docid": "6c15a9ec021ec38cf65532d06472be9d",
"text": "The aim of this article is to present a case study of usage of one of the data mining methods, neural network, in knowledge discovery from databases in the banking industry. Data mining is automated process of analysing, organization or grouping a large set of data from different perspectives and summarizing it into useful information using special algorithms. Data mining can help to resolve banking problems by finding some regularity, causality and correlation to business information which are not visible at first sight because they are hidden in large amounts of data. In this paper, we used one of the data mining methods, neural network, within the software package Alyuda NeuroInteligence to predict customer churn in bank. The focus on customer churn is to determinate the customers who are at risk of leaving and analysing whether those customers are worth retaining. Neural network is statistical learning model inspired by biological neural and it is used to estimate or approximate functions that can depend on a large number of inputs which are generally unknown. Although the method itself is complicated, there are tools that enable the use of neural networks without much prior knowledge of how they operate. The results show that clients who use more bank services (products) are more loyal, so bank should focus on those clients who use less than three products, and offer them products according to their needs. Similar results are obtained for different network topologies.",
"title": ""
},
{
"docid": "fe42cf28ff020c35d3a3013bb249c7d8",
"text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.",
"title": ""
},
{
"docid": "db6e3742a0413ad5f44647ab1826b796",
"text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.",
"title": ""
},
{
"docid": "80ca2b3737895e9222346109ac092637",
"text": "The common ground between figurative language and humour (in the form of jokes) is what Koestler (1964) termed the bisociation of ideas. In both jokes and metaphors, two disparate concepts are brought together, but the nature and the purpose of this conjunction is different in each case. This paper focuses on this notion of boundaries and attempts to go further by asking the question “when does a metaphor become a joke?”. More specifically, the main research questions of the paper are: (a) How do speakers use metaphor in discourse for humorous purposes? (b) What are the (metaphoric) cognitive processes that relate to the creation of humour in discourse? (c) What does the study of humour in discourse reveal about the nature of metaphoricity? This paper answers these questions by examining examples taken from a three-hour conversation, and considers how linguistic theories of humour (Raskin, 1985; Attardo and Raskin, 1991; Attardo, 1994; 2001) and cognitive theories of metaphor and blending (Lakoff and Johnson, 1980; Fauconnier and Turner, 2002) can benefit from each other. Boundaries in Humour and Metaphor The goal of this paper is to explore the relationship between metaphor (and, more generally, blending) and humour, in order to attain a better understanding of the cognitive processes that are involved or even contribute to laughter in discourse. This section will present briefly research in both areas and will identify possible common ground between the two. More specifically, the notion of boundaries will be explored in both areas. The following section explores how metaphor can be used for humorous purposes in discourse by applying relevant theories of humour and metaphor to conversational data. Linguistic theories of humour highlight the importance of duality and tension in humorous texts. Koestler (1964: 51) in discussing comic creativity notes that: The sudden bisociation of an idea or event with two habitually incompatible matrices will produce a comic effect, provided that the narrative, the semantic pipeline, carries the right kind of emotional tension. When the pipe is punctured, and our expectations are fooled, the now redundant tension gushes out in laughter, or is spilled in the gentler form of the sou-rire [my emphasis]. This oft-quoted passage introduces the basic themes and mechanisms that later were explored extensively within contemporary theories of humour: a humorous text must relate to two different and opposing in some way scenarios; this duality is not",
"title": ""
},
{
"docid": "78c54496ada5e4997c72adfeaae3e41f",
"text": "In the past decade, online music streaming services (MSS), e.g. Pandora and Spotify, experienced exponential growth. The sheer volume of music collection makes music recommendation increasingly important and the related algorithms are well-documented. In prior studies, most algorithms employed content-based model (CBM) and/or collaborative filtering (CF) [3]. The former one focuses on acoustic/signal features extracted from audio content, and the latter one investigates music rating and user listening history. Actually, MSS generated user data present significant heterogeneity. Taking user-music relationship as an example, comment, bookmark, and listening history may potentially contribute to music recommendation in very different ways. Furthermore, user and music can be implicitly related via more complex relationships, e.g., user-play-artist-perform-music. From this viewpoint, user-user, music-music or user-music relationship can be much more complex than the classical CF approach assumes. For these reasons, we model music metadata and MSS generated user data in the form of a heterogeneous graph, where 6 different types of nodes interact through 16 types of relationships. We can propose many recommendation hypotheses based on the ways users and songs are connected on this graph, in the form of meta paths. The recommendation problem, then, becomes a (supervised) random walk problem on the heterogeneous graph [2]. Unlike previous heterogeneous graph mining studies, the constructed heterogeneous graph in our case is more complex, and manually formulated meta-path based hypotheses cannot guarantee good performance. In the pilot study [2], we proposed to automatically extract all the potential meta paths within a given length on the heterogeneous graph scheme, evaluate their recommendation performance on the training data, and build a learning to rank model with the best ones. Results show that the new method can significantly enhance the recommendation performance. However, there are two problems with this approach: 1. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). WSDM 2016 February 22-25, 2016, San Francisco, CA, USA c © 2016 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-3716-8/16/02. DOI: http://dx.doi.org/10.1145/2835776.2855088 including the individually best performing meta paths in the learning to rank model neglects the dependency between features; 2. it is very time consuming to calculate graph based features. Traditional feature selection methods would only work if all feature values are readily available, which would make this recommendation approach highly inefficient. In this proposal, we attempt to address these two problems by adapting the feature selection for ranking method (FSR) proposed by Geng, Liu, Qin, and Li [1]. This feature selection method developed specifically for learning to rank tasks evaluates features based on their importance when used alone, and their similarity between each other. Applying this method on the whole set of meta-path based features would be very costly. Alternatively, we use it on sub meta paths that are shared components of multiple full meta paths. 
We start from sub meta paths of length=1 and only the ones selected by FSR have the chance to grow to sub meta paths of length=2. Then we repeat this process until the selected sub meta paths grow to full ones. During each step, we drop some meta paths because they contain unselected sub meta paths. Finally, we will derive a subset of the original meta paths and save time by extracting values for fewer features. In our preliminary experiment, the proposed method outperforms the original FSR algorithm in both efficiency and effectiveness.",
"title": ""
},
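The music recommendation passage above describes growing sub meta paths level by level and pruning them with FSR-based feature selection. The sketch below captures that loop in schematic form; `fsr_select` and `extend` are hypothetical placeholders standing in for the actual FSR scoring and the graph-schema-driven extension, not a real API.

```python
# Sketch of the iterative meta-path pruning loop described above: sub meta
# paths of length 1 are scored, only the selected ones are extended to
# length 2, and so on until full-length paths remain.
def grow_meta_paths(edge_types, max_length, fsr_select, extend):
    # Length-1 candidates: one sub meta path per edge type.
    candidates = [(edge,) for edge in edge_types]
    selected = candidates
    for length in range(1, max_length + 1):
        # Keep only the sub meta paths that survive FSR at this length.
        selected = fsr_select(candidates)
        if length == max_length:
            break
        # Grow each surviving sub path by one edge; full paths containing an
        # unselected sub path are never generated, so they are implicitly dropped.
        candidates = [path + (edge,) for path in selected for edge in extend(path)]
    return selected

# Toy usage with a stand-in selection rule (drop any path ending in a
# "bookmark" edge) and a trivial extension rule:
edges = ["listen", "bookmark", "perform"]
paths = grow_meta_paths(
    edge_types=edges,
    max_length=3,
    fsr_select=lambda cands: [p for p in cands if p[-1] != "bookmark"],
    extend=lambda path: edges,
)
print(paths)
```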
{
"docid": "265e9de6c65996e639fd265be170e039",
"text": "Topical crawling is a young and creative area of research that holds the promise of benefiting from several sophisticated data mining techniques. The use of classification algorithms to guide topical crawlers has been sporadically suggested in the literature. No systematic study, however, has been done on their relative merits. Using the lessons learned from our previous crawler evaluation studies, we experiment with multiple versions of different classification schemes. The crawling process is modeled as a parallel best-first search over a graph defined by the Web. The classifiers provide heuristics to the crawler thus biasing it towards certain portions of the Web graph. Our results show that Naive Bayes is a weak choice for guiding a topical crawler when compared with Support Vector Machine or Neural Network. Further, the weak performance of Naive Bayes can be partly explained by extreme skewness of posterior probabilities generated by it. We also observe that despite similar performances, different topical crawlers cover subspaces on the Web with low overlap.",
"title": ""
}
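The topical crawling passage above models crawling as best-first search guided by a classifier. The sketch below shows a single-worker version of that idea with a priority-queue frontier; `fetch_page`, `extract_links`, and `classifier` are hypothetical placeholders for the crawler plumbing and the trained scoring model (e.g. an SVM or neural network scoring page text).

```python
# Sketch of classifier-guided best-first crawling with a priority-queue frontier.
import heapq

def best_first_crawl(seeds, classifier, fetch_page, extract_links, budget=1000):
    frontier = [(-1.0, url) for url in seeds]     # max-heap via negated scores
    heapq.heapify(frontier)
    visited, fetched = set(seeds), []
    while frontier and len(fetched) < budget:
        neg_score, url = heapq.heappop(frontier)
        text = fetch_page(url)
        fetched.append((url, -neg_score))
        for link in extract_links(text):
            if link not in visited:
                visited.add(link)
                # The classifier score biases the crawl toward on-topic regions.
                heapq.heappush(frontier, (-classifier(text, link), link))
    return fetched

# Toy run over a fake three-page "web":
links = {"seed": ["p1", "p2"], "p1": [], "p2": []}
print(best_first_crawl(["seed"], classifier=lambda text, link: 0.5,
                       fetch_page=lambda url: url,
                       extract_links=lambda text: links.get(text, [])))
```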
] |
scidocsrr
|
0bd8336f3987f98ed58c0bd38f1ea973
|
Ranking Wily People Who Rank Each Other
|
[
{
"docid": "8300897859310ad4ee6aff55d84f31da",
"text": "We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents are selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.",
"title": ""
},
{
"docid": "bd76b8e1e57f4e38618cf56f4b8d33e2",
"text": "For impartial division, each participant reports only her opinion about the fair relative shares of the other participants, and this report has no effect on her own share. If a specific division is compatible with all reports, it is implemented. We propose a natural method meeting these requirements, for a division among four or more participants. No such method exists for a division among three participants.",
"title": ""
}
] |
[
{
"docid": "b41f25d30ac88dcc1e1ba8a2a9fead33",
"text": "Due to the growing interest in data mining and the educational system, educational data mining is the emerging topic for research community. The various techniques of data mining like classification and clustering can be applied to bring out hidden knowledge from the educational data. Web video mining is retrieving the content using data mining techniques from World Wide Web. There are two approaches for web video mining using traditional image processing (signal processing) and metadata based approach. In this paper, we focus on the education data mining and precisely MOOCs which constitute a new modality of e-learning and clustering techniques. We present a methodology that can be used for mining Moocs videos using metadata as leading contribution for knowledge discovery.",
"title": ""
},
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
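To ground the packet-loss setting described above, the sketch below runs the estimation half of the problem: a scalar Kalman filter whose measurements arrive as Bernoulli packets with probability p, so the error covariance grows during drop-outs and, for unstable dynamics, diverges if p is too small. The system, noise, and arrival parameters are illustrative assumptions, not the paper's derivation.

```python
# Minimal sketch: scalar Kalman filtering with intermittent (Bernoulli)
# measurement arrivals. When a packet is lost, only the time update runs.
import numpy as np

rng = np.random.default_rng(6)
a, c = 1.2, 1.0          # unstable scalar dynamics x' = a x + w, y = c x + v
q, r = 1.0, 1.0          # process / measurement noise variances
p_arrival = 0.6          # probability a measurement packet gets through

x, x_hat, P = 0.0, 0.0, 1.0
trace = []
for _ in range(200):
    x = a * x + rng.normal(0, np.sqrt(q))                 # true state
    y = c * x + rng.normal(0, np.sqrt(r))                 # sensor reading
    x_hat, P = a * x_hat, a * a * P + q                   # time update
    if rng.random() < p_arrival:                          # packet delivered?
        K = P * c / (c * c * P + r)
        x_hat, P = x_hat + K * (y - c * x_hat), (1 - K * c) * P
    trace.append(P)
print("mean / max error covariance:", np.mean(trace), max(trace))
```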
{
"docid": "3cdd640f48c1713c3d360da00c634883",
"text": "Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyber bullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in HindiEnglish code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.",
"title": ""
},
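As a hedged illustration of the kind of supervised system described in the hate speech passage above, the sketch below builds character n-gram TF-IDF features (a reasonable fit for code-mixed spelling variation) and trains a logistic regression classifier. The toy tweets and labels are invented, and the word-level and lexicon-based features of the full system are omitted.

```python
# Minimal sketch: character n-gram features feeding a standard classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "yeh log bahut ache hain, good people",
    "what a lovely din tha yaar",
    "in logon ko nikalo, throw them out",
    "i hate these log, sab bekar",
]
labels = [0, 0, 1, 1]   # 0 = normal speech, 1 = hate speech (toy labels)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["kitna acha banda hai"]))
```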
{
"docid": "497fcf32281c8e9555ac975a3de45a6a",
"text": "This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is a responsibility of the agents to discover the mechanics of each game, the requirements to obtain a high score and the requisites to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.",
"title": ""
},
{
"docid": "a3a260159a6509670c4ac3547cfc9ef0",
"text": "The advent of near infrared imagery and it's applications in face recognition has instigated research in cross spectral (visible to near infrared) matching. Existing research has focused on extracting textural features including variants of histogram of oriented gradients. This paper focuses on studying the effectiveness of these features for cross spectral face recognition. On NIR-VIS-2.0 cross spectral face database, three HOG variants are analyzed along with dimensionality reduction approaches and linear discriminant analysis. The results demonstrate that DSIFT with subspace LDA outperforms a commercial matcher and other HOG variants by at least 15%. We also observe that histogram of oriented gradient features are able to encode similar facial features across spectrums.",
"title": ""
},
{
"docid": "cf219b9093dc55f09d067954d8049aeb",
"text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.",
"title": ""
},
{
"docid": "6c8151eee3fcfaec7da724c2a6899e8f",
"text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.",
"title": ""
},
{
"docid": "dc83a0826e509d9d4be6b4b58550b20e",
"text": "This review describes historical iodine deficiency in the U.K., gives current information on dietary sources of iodine and summarises recent evidence of iodine deficiency and its association with child neurodevelopment. Iodine is required for the production of thyroid hormones that are needed for brain development, particularly during pregnancy. Iodine deficiency is a leading cause of preventable brain damage worldwide and is associated with impaired cognitive function. Despite a global focus on the elimination of iodine deficiency, iodine is a largely overlooked nutrient in the U.K., a situation we have endeavoured to address through a series of studies. Although the U.K. has been considered iodine-sufficient for many years, there is now concern that iodine deficiency may be prevalent, particularly in pregnant women and women of childbearing age; indeed we found mild-to-moderate iodine deficiency in pregnant women in Surrey. As the major dietary source of iodine in the U.K. is milk and dairy produce, it is relevant to note that we have found the iodine concentration of organic milk to be over 40% lower than that of conventional milk. In contrast to many countries, iodised table salt is unlikely to contribute to U.K. iodine intake as we have shown that its availability is low in grocery stores. This situation is of concern as the level of U.K. iodine deficiency is such that it is associated with adverse effects on offspring neurological development; we demonstrated a higher risk of low IQ and poorer reading-accuracy scores in U.K. children born to mothers who were iodine-deficient during pregnancy. Given our findings and those of others, iodine status in the U.K. population should be monitored, particularly in vulnerable subgroups such as pregnant women and children.",
"title": ""
},
{
"docid": "7e91815398915670fadba3c60e772d14",
"text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-",
"title": ""
},
{
"docid": "2e4a3f77d0b8c31600fca0f1af82feb5",
"text": "Forwarding data in scenarios where devices have sporadic connectivity is a challenge. An example scenario is a disaster area, where forwarding information generated in the incident location, like victims’ medical data, to a coordination point is critical for quick, accurate and coordinated intervention. New applications are being developed based on mobile devices and wireless opportunistic networks as a solution to destroyed or overused communication networks. But the performance of opportunistic routing methods applied to emergency scenarios is unknown today. In this paper, we compare and contrast the efficiency of the most significant opportunistic routing protocols through simulations in realistic disaster scenarios in order to show how the different characteristics of an emergency scenario impact in the behaviour of each one of them. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "40a96dfd399c27ca8b2966693732b975",
"text": "Graph matching problems of varying types are important in a wide array of application areas. A graph matching problem is a problem involving some form of comparison between graphs. Some of the many application areas of such problems include information retrieval, sub-circuit identification, chemical structure classification, and networks. Problems of efficient graph matching arise in any field that may be modeled with graphs. For example, any problem that can be modeled with binary relations between entities in the domain is such a problem. The individual entities in the problem domain become nodes in the graph. And each binary relation becomes an edge between the appropriate nodes. Although it is possible to formulate such a large array of problems as graph matching problems, it is not necessarily a good idea to do so. Graph matching is a very difficult problem. The graph isomorphism problem is to determine if there exists a one-to-one mapping from the nodes of one graph to the nodes of a second graph that preserves adjacency. Similarly, the subgraph isomorphism problem is to determine if there exists a one-to-one mapping from the",
"title": ""
},
{
"docid": "c5033a414493aa367ea9af5602471f49",
"text": "We present the Height Optimized Trie (HOT), a fast and space-efficient in-memory index structure. The core algorithmic idea of HOT is to dynamically vary the number of bits considered at each node, which enables a consistently high fanout and thereby good cache efficiency. The layout of each node is carefully engineered for compactness and fast search using SIMD instructions. Our experimental results, which use a wide variety of workloads and data sets, show that HOT outperforms other state-of-the-art index structures for string keys both in terms of search performance and memory footprint, while being competitive for integer keys. We believe that these properties make HOT highly useful as a general-purpose index structure for main-memory databases.",
"title": ""
},
{
"docid": "3655e688c58a719076f3605d5a9c9893",
"text": "The performance of a generic pedestrian detector may drop significantly when it is applied to a specific scene due to mismatch between the source dataset used to train the detector and samples in the target scene. In this paper, we investigate how to automatically train a scene-specific pedestrian detector starting with a generic detector in video surveillance without further manually labeling any samples under a novel transfer learning framework. It tackles the problem from three aspects. (1) With a graphical representation and through exploring the indegrees from target samples to source samples, the source samples are properly re-weighted. The indegrees detect the boundary between the distributions of the source dataset and the target dataset. The re-weighted source dataset better matches the target scene. (2) It takes the context information from motions, scene structures and scene geometry as the confidence scores of samples from the target scene to guide transfer learning. (3) The confidence scores propagate among samples on a graph according to the underlying visual structures of samples. All these considerations are formulated under a single objective function called Confidence-Encoded SVM. At the test stage, only the appearance-based detector is used without the context cues. The effectiveness of the proposed framework is demonstrated through experiments on two video surveillance datasets. Compared with a generic pedestrian detector, it significantly improves the detection rate by 48% and 36% at one false positive per image on the two datasets respectively.",
"title": ""
},
{
"docid": "c30e938b57863772e8c7bc0085d22f71",
"text": "Game theory is a set of tools developed to model interactions between agents with conflicting interests, and is thus well-suited to address some problems in communications systems. In this paper we present some of the basic concepts of game theory and show why it is an appropriate tool for analyzing some communication problems and providing insights into how communication systems should be designed. We then provided a detailed example in which game theory is applied to the power control problem in a",
"title": ""
},
{
"docid": "bb3cb573c5b9727d7a9b22cca0039a64",
"text": "The control objectives for information and related technology (COBIT) is a \"trusted\" open standard that is being used increasingly by a diverse range of organizations throughout the world. COBIT is arguably the most appropriate control framework to help an organization ensure alignment between use of information technology (IT) and its business goals, as it places emphasis on the business need that is satisfied by each control objective by J. Colbert, and P. Bowen (1996). This paper reports on the use of a simple classification of the published literature on COBIT, to highlight some of the features of that literature. The appropriate alignment between use of IT and the business goals of a organization is fundamental to efficient and effective IT governance. IT governance \"...is the structure of relationships and processes to develop, direct and control IS/IT resources in order to achieve the enterprise's goals\". IT governance has been recognized as a critical success factor in the achievement of corporate success by deploying information through the application of technology by N. Korac-Kakabadse and A. Kakabadse (2001). The importance of IT governance can be appreciated in light of the Gartner Group's finding that large organizations spend over 50% of their capital investment on IT by C. Koch (2002). However, research has suggested that the contribution of IT governance varies in its effectiveness. IT control frameworks are designed to promote effective IT governance. Recent pressures, including the failure of organizations such as Enron, have led to an increased focus on corporate accountability. For example, the Sarbanes-Oxley Act of 2002 introduced legislation that imposed new governance requirements by G. Coppin (2003). These and other changes have resulted in a new corporate governance model with an increased emphasis on IT governance, which goes beyond the traditional focus of corporate governance on financial aspects by R. Roussey (2003).",
"title": ""
},
{
"docid": "f60048d9803f2d3ae0178a14d7b03536",
"text": "Forking is the creation of a new software repository by copying another repository. Though forking is controversial in traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use codes as their own and make changes. A deep understanding of repository forking can provide important insights for OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We make surveys, and analyze programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features and keep copies etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42 % of developers that we have surveyed agree that an automated recommendation tool is useful to help them pick repositories to fork, while more than 44.4 % of developers do not value a recommendation tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer’s preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have higher percentage of organizations, more followers and earlier registration in GitHub. Our results show that forking is mainly used for making contributions of original repositories, and it is beneficial for OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "9868b4d1c4ab5eb92b9d8fbe2f1715a1",
"text": "The work presented in this paper focuses on the design of a novel flexure-based mechanism capable of delivering planar motion with three degrees of freedom (3-DOF). Pseudo rigid body modeling (PRBM) and kinematic analysis of the mechanism are used to predict the motion of the mechanism in the X-, Y- and θ-directions. Lever based amplification is used to enhance the displacement of the mechanism. The presented design is small and compact in size (about 142mm by 110mm). The presented 3-DOF flexure-based miniature micro/nano mechanism delivers smooth motion in X, Y and θ, with maximum displacements of 142.09 μm in X-direction, 120.36 μm in Y-direction and 6.026 mrad in θ-rotation.",
"title": ""
},
{
"docid": "33cf6c26de09c7772a529905d9fa6b5c",
"text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.",
"title": ""
}
] |
scidocsrr
|
36458d622688ad4a11f8a60be6a91a0e
|
Process Control Cyber-Attacks and Labelled Datasets on S7Comm Critical Infrastructure
|
[
{
"docid": "78d88298e0b0e197f44939ee96210778",
"text": "Cyber-security research and development for SCADA is being inhibited by the lack of available SCADA attack datasets. This paper presents a modular dataset generation framework for SCADA cyber-attacks, to aid the development of attack datasets. The presented framework is based on requirements derived from related prior research, and is applicable to any standardised or proprietary SCADA protocol. We instantiate our framework and validate the requirements using a Python implementation. This paper provides experiments of the framework's usage on a state-of-the-art DNP3 critical infrastructure test-bed, thus proving framework's ability to generate SCADA cyber-attack datasets.",
"title": ""
},
{
"docid": "57d5b63c8ad062e1c15b1037e9973b28",
"text": "SCADA systems are widely used in critical infrastructure sectors, including electricity generation and distribution, oil and gas production and distribution, and water treatment and distribution. SCADA process control systems are typically isolated from the internet via firewalls. However, they may still be subject to illicit cyber penetrations and may be subject to cyber threats from disgruntled insiders. We have developed a set of command injection, data injection, and denial of service attacks which leverage the lack of authentication in many common control system communication protocols including MODBUS, DNP3, and EtherNET/IP. We used these exploits to aid in development of a neural network based intrusion detection system which monitors control system physical behavior to detect artifacts of command and response injection attacks. Finally, we present intrusion detection accuracy results for our neural network based IDS which includes input features derived from physical properties of the control system.",
"title": ""
}
] |
[
{
"docid": "15975baddd2e687d14588fcfc674bbc8",
"text": "The treatment of external genitalia trauma is diverse according to the nature of trauma and injured anatomic site. The classification of trauma is important to establish a strategy of treatment; however, to date there has been less effort to make a classification for trauma of external genitalia. The classification of external trauma in male could be established by the nature of injury mechanism or anatomic site: accidental versus self-mutilation injury and penis versus penis plus scrotum or perineum. Accidental injury covers large portion of external genitalia trauma because of high prevalence and severity of this disease. The aim of this study is to summarize the mechanism and treatment of the traumatic injury of penis. This study is the first review describing the issue.",
"title": ""
},
{
"docid": "28530d3d388edc5d214a94d70ad7f2c3",
"text": "In next generation wireless mobile networks, network virtualization will become an important key technology. In this paper, we firstly propose a resource allocation scheme for enabling efficient resource allocation in wireless network virtualization. Then, we formulate the resource allocation strategy as an optimization problem, considering not only the revenue earned by serving end users of virtual networks, but also the cost of leasing infrastructure from infrastructure providers. In addition, we develop an efficient alternating direction method of multipliers (ADMM)-based distributed virtual resource allocation algorithm in virtualized wireless networks. Simulation results are presented to show the effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "a652eb10bf8f15855f9ac1f1981dc07f",
"text": "n = 379) were jail inmates at the time of ingestion, 22.9% ( n = 124) had a history of psychosis, and 7.2% ( n = 39) were alcoholics or denture-wearing elderly subjects. Most foreign bodies passed spontaneously (75.6%; n = 410). Endoscopic removal was possible in 19.5% ( n = 106) and was not associated with any morbidity. Only 4.8% ( n = 26) required surgery. Of the latter, 30.8% ( n = 8) had long gastric FBs with no tendency for distal passage and were removed via gastrotomy; 15.4% ( n = 4) had thin, sharp FBs, causing perforation; and 53.8% ( n = 14) had FBs impacted in the ileocecal region, which were removed via appendicostomy. Conservative approach to FB ingestion is justified, although early endoscopic removal from the stomach is recommended. In cases of failure, surgical removal for gastric FBs longer than 7.0 cm is wise. Thin, sharp FBs require a high index of suspicion because they carry a higher risk for perforation. The ileocecal region is the most common site of impaction. Removal of the FB via appendicostomy is the safest option and should not be delayed more than 48 hours.",
"title": ""
},
{
"docid": "7175d7767b2fc227136863bdec145dc2",
"text": "In this letter, a tapered slot ultrawide band (UWB) Vivaldi antenna with enhanced gain having band notch characteristics in the WLAN/WiMAX band is presented. In this framework, a reference tapered slot Vivaldi antenna is first designed for UWB operation that is, 3.1–10.6 GHz using the standard procedure. The band-notch operation at 4.8 GHz is achieved with the help of especially designed complementary split ring resonator (CSRR) cell placed near the excitation point of the antenna. Further, the gain of the designed antenna is enhanced substantially with the help of anisotropic zero index metamaterial (AZIM) cells, which are optimized and positioned on the substrate in a particular fashion. In order to check the novelty of the design procedure, three distinct Vivaldi structures are fabricated and tested. Experimental data show quite good agreement with the simulated results. As the proposed antenna can minimize the electromagnetic interference (EMI) caused by the IEEE 802.11 WLAN/WiMAX standards, it can be used more efficiently in the UWB frequency band. VC 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:233–238, 2016; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.29534",
"title": ""
},
{
"docid": "a2775f9d8e0dd72ca5dd4ba76b49070a",
"text": "What are the critical requirements to be considered for the security measures in Internet of Things (IoT) services? Further, how should those security resources be allocated? To provide valuable insight into these questions, this paper introduces a security assessment framework for the IoT service environment from an architectural perspective. Our proposed framework integrates fuzzy DEMATEL and fuzzy ANP to reflect dependence and feedback interrelations among security criteria and, ultimately, to weigh and prioritize them. The results, gleaned from the judgments of 38 security experts, revealed that security design should put more importance on the service layer, especially to ensure availability and trust. We believe that these results will contribute to the provision of more secure and reliable IoT services.",
"title": ""
},
{
"docid": "2332c8193181b5ad31e9424ca37b0f5a",
"text": "The ability to grasp ordinary and potentially never-seen objects is an important feature in both domestic and industrial robotics. For a system to accomplish this, it must autonomously identify grasping locations by using information from various sensors, such as Microsoft Kinect 3D camera. Despite numerous progress, significant work still remains to be done in this field. To this effect, we propose a dictionary learning and sparse representation (DLSR) framework for representing RGBD images from 3D sensors in the context of determining such good grasping locations. In contrast to previously proposed approaches that relied on sophisticated regularization or very large datasets, the derived perception system has a fast training phase and can work with small datasets. It is also theoretically founded for dealing with masked-out entries, which are common with 3D sensors. We contribute by presenting a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset. Importantly, experimental results show a performance improvement of 1.69% in detection and 3.16% in recognition over current state-of-the-art convolutional neural network (CNN). Even though nowadays most popular vision-based approach is CNN, this suggests that DLSR is also a viable alternative with interesting advantages that CNN has not.",
"title": ""
},
{
"docid": "2399e1ffd634417f00273993ad0ba466",
"text": "Requirements prioritization aims at identifying the most important requirements for a software system, a crucial step when planning for system releases and deciding which requirements to implement in each release. Several prioritization methods and supporting tools have been proposed so far. How to evaluate their properties, with the aim of supporting the selection of the most appropriate method for a specific project, is considered a relevant question. In this paper, we present an empirical study aiming at evaluating two state-of-the art tool-supported requirements prioritization methods, AHP and CBRank. We focus on three measures: the ease of use, the time-consumption and the accuracy. The experiment has been conducted with 23 experienced subjects on a set of 20 requirements from a real project. Results indicate that for the first two characteristics CBRank overcomes AHP, while for the accuracy AHP performs better than CBRank, even if the resulting ranks from the two methods are very similar. The majority of the users found CBRank the ‘‘overall best”",
"title": ""
},
{
"docid": "ba324cf5ca59b193d1f4ec9df5a691fd",
"text": "The Chiron-1 user interface system demonstrates key techniques that enable a strict separation of an application from its user interface. These techniques include separating the control-flow aspects of the application and user interface: they are concurrent and may contain many threads. Chiron also separates windowing and look-and-feel issues from dialogue and abstract presentation decisions via mechanisms employing a client-server architecture. To separate application code from user interface code, user interface agents called artists are attached to instances of application abstract data types (ADTs). Operations on ADTs within the application implicitly trigger user interface activities within the artists. Multiple artists can be attached to ADTs, providing multiple views and alternative forms of access and manipulation by either a single user or by multiple users. Each artist and the application run in separate threads of control. Artists maintain the user interface by making remote calls to an abstract depiction hierarchy in the Chiron server, insulting the user interface code from the specifics of particular windowing systems and toolkits. The Chiron server and clients execute in separate processes. The client-server architecture also supports multilingual systems: mechanisms are demonstrated that support clients written in programming languages other than that of the server while nevertheless supporting object-oriented server concepts. The system has been used in several universities and research and development projects. It is available by anonymous ftp.",
"title": ""
},
{
"docid": "9a071b23eb370f053a5ecfd65f4a847d",
"text": "INTRODUCTION\nConcomitant obesity significantly impairs asthma control. Obese asthmatics show more severe symptoms and an increased use of medications.\n\n\nOBJECTIVES\nThe primary aim of the study was to identify genes that are differentially expressed in the peripheral blood of asthmatic patients with obesity, asthmatic patients with normal body mass, and obese patients without asthma. Secondly, we investigated whether the analysis of gene expression in peripheral blood may be helpful in the differential diagnosis of obese patients who present with symptoms similar to asthma.\n\n\nPATIENTS AND METHODS\nThe study group included 15 patients with asthma (9 obese and 6 normal-weight patients), while the control group-13 obese patients in whom asthma was excluded. The analysis of whole-genome expression was performed on RNA samples isolated from peripheral blood.\n\n\nRESULTS\nThe comparison of gene expression profiles between asthmatic patients with obesity and those with normal body mass revealed a significant difference in 6 genes. The comparison of the expression between controls and normal-weight patients with asthma showed a significant difference in 23 genes. The analysis of genes with a different expression revealed a group of transcripts that may be related to an increased body mass (PI3, LOC100008589, RPS6KA3, LOC441763, IFIT1, and LOC100133565). Based on gene expression results, a prediction model was constructed, which allowed to correctly classify 92% of obese controls and 89% of obese asthmatic patients, resulting in the overall accuracy of the model of 90.9%.\n\n\nCONCLUSIONS\nThe results of our study showed significant differences in gene expression between obese asthmatic patients compared with asthmatic patients with normal body mass as well as in obese patients without asthma compared with asthmatic patients with normal body mass.",
"title": ""
},
{
"docid": "b4166b57419680e348d7a8f27fbc338a",
"text": "OBJECTIVES\nTreatments of female sexual dysfunction have been largely unsuccessful because they do not address the psychological factors that underlie female sexuality. Negative self-evaluative processes interfere with the ability to attend and register physiological changes (interoceptive awareness). This study explores the effect of mindfulness meditation training on interoceptive awareness and the three categories of known barriers to healthy sexual functioning: attention, self-judgment, and clinical symptoms.\n\n\nMETHODS\nForty-four college students (30 women) participated in either a 12-week course containing a \"meditation laboratory\" or an active control course with similar content or laboratory format. Interoceptive awareness was measured by reaction time in rating physiological response to sexual stimuli. Psychological barriers were assessed with self-reported measures of mindfulness and psychological well-being.\n\n\nRESULTS\nWomen who participated in the meditation training became significantly faster at registering their physiological responses (interoceptive awareness) to sexual stimuli compared with active controls (F(1,28) = 5.45, p = .03, η(p)(2) = 0.15). Female meditators also improved their scores on attention (t = 4.42, df = 11, p = .001), self-judgment, (t = 3.1, df = 11, p = .01), and symptoms of anxiety (t = -3.17, df = 11, p = .009) and depression (t = -2.13, df = 11, p < .05). Improvements in interoceptive awareness were correlated with improvements in the psychological barriers to healthy sexual functioning (r = -0.44 for attention, r = -0.42 for self-judgment, and r = 0.49 for anxiety; all p < .05).\n\n\nCONCLUSIONS\nMindfulness-based improvements in interoceptive awareness highlight the potential of mindfulness training as a treatment of female sexual dysfunction.",
"title": ""
},
{
"docid": "527f52078b24a8d8b49f4e9411a69936",
"text": "Now-a-days Big Data have been created lot of buzz in technology world. Sentiment Analysis or opinion mining is very important application of ‘Big Data’. Sentiment analysis is used for knowing voice or response of crowd for products, services, organizations, individuals, movie reviews, issues, events, news etc... In this paper we are going to discuss about exiting methods, approaches to do sentimental analysis for unstructured data which reside on web. Currently, Sentiment Analysis concentrates for subjective statements or on subjectivity and overlook objective statements which carry sentiment(s). So, we propose new approach classify and handle subjective as well as objective statements for sentimental analysis. Keywords— Sentiment Analysis, Text Mining, Machine learning, Natural Language Processing, Big Data",
"title": ""
},
{
"docid": "f71987051ad044673c8b41709cb34df7",
"text": "The quality and the correctness of software are often the greatest concern in electronic systems. Formal verification tools can provide a guarantee that a design is free of specific flaws. This paper surveys algorithms that perform automatic static analysis of software to detect programming errors or prove their absence. The three techniques considered are static analysis with abstract domains, model checking, and bounded model checking. A short tutorial on these techniques is provided, highlighting their differences when applied to practical problems. This paper also surveys tools implementing these techniques and describes their merits and shortcomings.",
"title": ""
},
{
"docid": "23f3ab8e7bc934ebb786916a5c4c7d27",
"text": "This paper presents a Haskell library for graph processing: DeltaGraph. One unique feature of this system is that intentions to perform graph updates can be memoized in-graph in a decentralized fashion, and the propagation of these intentions within the graph can be decoupled from the realization of the updates. As a result, DeltaGraph can respond to updates in constant time and work elegantly with parallelism support. We build a Twitter-like application on top of DeltaGraph to demonstrate its effectiveness and explore parallelism and opportunistic computing optimizations.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "133d850d8fc0252ad69ee178e1e523af",
"text": "In this article, we build models to predict the existence of citations among papers by formulating link prediction for 5 large-scale datasets of citation networks. The supervised machine-learning model is applied with 11 features. As a result, our learner performs very well, with the F1 values of between 0.74 and 0.82. Three features in particular, link-based Jaccard coefficient , difference in betweenness centrality , and cosine similarity of term frequency–inverse document frequency vectors, largely affect the predictions of citations.The results also indicate that different models are required for different types of research areas—research fields with a single issue or research fields with multiple issues. In the case of research fields with multiple issues, there are barriers among research fields because our results indicate that papers tend to be cited in each research field locally. Therefore, one must consider the typology of targeted research areas when building models for link prediction in citation networks.",
"title": ""
},
{
"docid": "aecacf7d1ba736899f185ee142e32522",
"text": "BACKGROUND\nLow rates of handwashing compliance among nurses are still reported in literature. Handwashing beliefs and attitudes were found to correlate and predict handwashing practices. However, such an important field is not fully explored in Jordan.\n\n\nOBJECTIVES\nThis study aims at exploring Jordanian nurses' handwashing beliefs, attitudes, and compliance and examining the predictors of their handwashing compliance.\n\n\nMETHODS\nA cross-sectional multicenter survey design was used to collect data from registered nurses and nursing assistants (N = 198) who were providing care to patients in governmental hospitals in Jordan. Data collection took place over 3 months during the period of February 2011 to April 2011 using the Handwashing Assessment Inventory.\n\n\nRESULTS\nParticipants' mean score of handwashing compliance was 74.29%. They showed positive attitudes but seemed to lack knowledge concerning handwashing. Analysis revealed a 5-predictor model, which accounted for 37.5% of the variance in nurses' handwashing compliance. Nurses' beliefs relatively had the highest prediction effects (β = .309, P < .01), followed by skin assessment (β = .290, P < .01).\n\n\nCONCLUSION\nJordanian nurses reported moderate handwashing compliance and were found to lack knowledge concerning handwashing protocols, for which education programs are recommended. This study raised the awareness regarding the importance of complying with handwashing protocols.",
"title": ""
},
{
"docid": "21f079e590e020df08d461ba78a26d65",
"text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.",
"title": ""
},
{
"docid": "dc169d6f01d225028cc76658323e79b3",
"text": "Adopting a primary prevention perspective, this study examines competencies with the potential to enhance well-being and performance among future workers. More specifically, the contributions of ability-based and trait models of emotional intelligence (EI), assessed through well-established measures, to indices of hedonic and eudaimonic well-being were examined for a sample of 157 Italian high school students. The Mayer-Salovey-Caruso Emotional Intelligence Test was used to assess ability-based EI, the Bar-On Emotional Intelligence Inventory and the Trait Emotional Intelligence Questionnaire were used to assess trait EI, the Positive and Negative Affect Scale and the Satisfaction With Life Scale were used to assess hedonic well-being, and the Meaningful Life Measure was used to assess eudaimonic well-being. The results highlight the contributions of trait EI in explaining both hedonic and eudaimonic well-being, after controlling for the effects of fluid intelligence and personality traits. Implications for further research and intervention regarding future workers are discussed.",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] |
scidocsrr
|
8681b706596ec0cdc42c757d4012acac
|
Coarse-and Fine-Grained Sentiment Analysis of Social Media Text
|
[
{
"docid": "fc70a1820f838664b8b51b5adbb6b0db",
"text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.",
"title": ""
},
{
"docid": "03b3d8220753570a6b2f21916fe4f423",
"text": "Recent systems have been developed for sentiment classification, opinion recogni tion, and opinion analysis (e.g., detect ing polarity and strength). We pursue an other aspect of opinion analysis: identi fying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Con ditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source iden tification as a sequence tagging task, Au toSlog learns extraction patterns. Our re sults show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
}
] |
[
{
"docid": "452285eb334f8b4ecc17592e53d7080e",
"text": "Fathers are taking on more childcare and household responsibilities than they used to and many non-profit and government organizations have pushed for changes in policies to support fathers. Despite this effort, little research has explored how fathers go online related to their roles as fathers. Drawing on an interview study with 37 fathers, we find that they use social media to document and archive fatherhood, learn how to be a father, and access social support. They also go online to support diverse family needs, such as single fathers' use of Reddit instead of Facebook, fathers raised by single mothers' search for role models online, and stay-at-home fathers' use of father blogs. However, fathers are constrained by privacy concerns and perceptions of judgment relating to sharing content online about their children. Drawing on theories of fatherhood, we present theoretical and design ideas for designing online spaces to better support fathers and fatherhood. We conclude with a call for a research agenda to support fathers online.",
"title": ""
},
{
"docid": "de37d1ba8d9c467b5059a02e2eb6ed6a",
"text": "Periodontal disease represents a group of oral inflammatory infections initiated by oral pathogens which exist as a complex biofilms on the tooth surface and cause destruction to tooth supporting tissues. The severity of this disease ranges from mild and reversible inflammation of the gingiva (gingivitis) to chronic destruction of connective tissues, the formation of periodontal pocket and ultimately result in loss of teeth. While human subgingival plaque harbors more than 500 bacterial species, considerable research has shown that Porphyromonas gingivalis, a Gram-negative anaerobic bacterium, is the major etiologic agent which contributes to chronic periodontitis. This black-pigmented bacterium produces a myriad of virulence factors that cause destruction to periodontal tissues either directly or indirectly by modulating the host inflammatory response. Here, this review provides an overview of P. gingivalis and how its virulence factors contribute to the pathogenesis with other microbiome consortium in oral cavity.",
"title": ""
},
{
"docid": "05a77d687230dc28697ca1751586f660",
"text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.",
"title": ""
},
{
"docid": "fa14a12ca1971416100efd8e79089428",
"text": "Some statistical and machine learning methods have been proposed to build hard drive prediction models based on the SMART attributes, and have achieved good prediction performance. However, these models were not evaluated in the way as they are used in real-world data centers. Moreover, the hard drives deteriorate gradually, but these models can not describe this gradual change precisely. This paper proposes new hard drive failure prediction models based on Classification and Regression Trees, which perform better in prediction performance as well as stability and interpretability compared with the state-of the-art model, the Back propagation artificial neural network model. Experiments demonstrate that the Classification Tree (CT) model predicts over 95% of failures at a false alarm rate (FAR) under 0.1% on a real-world dataset containing 25,792 drives. Aiming at the practical application of prediction models, we test them with different drive families, with fewer number of drives, and with different model updating strategies. The CT model still shows steady and good performance. We propose a health degree model based on Regression Tree (RT) as well, which can give the drive a health assessment rather than a simple classification result. Therefore, the approach can deal with warnings raised by the prediction model in order of their health degrees. We implement a reliability model for RAID-6 systems with proactive fault tolerance and show that our CT model can significantly improve the reliability and/or reduce construction and maintenance cost of large-scale storage systems.",
"title": ""
},
{
"docid": "2e0882f6087269769a9ca1e3d2289af9",
"text": "The demand for minority representation in video games often focuses on proving that members of marginalized groups are gamers. In turn, it is asserted that the gaming industry should focus on appealing to these players via targeted content. Being targeted as a gamer, however, does not a gamer make. Identity as a gamer intersects with other identities like gender, race, and sexuality. Negative connotations about gaming lead people to not identify as gamers, and even to not play video games. This article concludes, based on interview data, that those invested in diversity in video games must focus their attention on the construction of the medium, and not the construction of the audience as such. This shift in academic attention is necessary to develop arguments for representation in games that do not rely on marking groups as specific kinds of gaming markets via identifiers like gender, race, and sexuality.",
"title": ""
},
{
"docid": "b3cbd3d9deaed22f1b936ad141d10078",
"text": "Thirty-eight fruit salad samples including cantaloupe, citrus fruits, honeydew, pineapple, cut strawberries and mixed fruit salads, and 65 pasteurized fruit juice samples (apple, carrot, grapefruit, grape and orange juices, apple cider, and soy milk) were purchased from local supermarkets in the Washington, DC area and tested for fungal contamination. The majority of fruit salad samples (97%) were contaminated with yeasts at levels ranging from <2.0 to 9.72 log10 of colony forming units per gram (cfu/g). Frequently encountered yeasts were Pichia spp., Candida pulcherrima, C. lambica, C. sake, Rhodotorula spp., and Debaryomyces polymorphus. Low numbers of Penicillium spp. were found in pineapple salads, whereas Cladosporium spp. were present in mixed fruit and cut strawberry salads. Twenty-two per cent of the fruit juice samples tested showed fungal contamination. Yeasts were the predominant contaminants ranging from <1.0 to 6.83 log10 cfu/ml. Yeasts commonly found in fruit juices were C. lambica, C. sake, and Rhodotorula rubra. Geotrichum spp. and low numbers of Penicillium and Fusarium spp. (1.70 and 1.60 log10 cfu/ml, respectively) were present in grapefruit juice.",
"title": ""
},
{
"docid": "55903de2bf1c877fac3fdfc1a1db68fc",
"text": "UK small to medium sized enterprises (SMEs) are suffering increasing levels of cybersecurity breaches and are a major point of vulnerability in the supply chain networks in which they participate. A key factor for achieving optimal security levels within supply chains is the management and sharing of cybersecurity information associated with specific metrics. Such information sharing schemes amongst SMEs in a supply chain network, however, would give rise to a certain level of risk exposure. In response, the purpose of this paper is to assess the implications of adopting select cybersecurity metrics for information sharing in SME supply chain consortia. Thus, a set of commonly used metrics in a prototypical cybersecurity scenario were chosen and tested from a survey of 17 UK SMEs. The results were analysed in respect of two variables; namely, usefulness of implementation and willingness to share across supply chains. Consequently, we propose a Cybersecurity Information Sharing Taxonomy for identifying risk exposure categories for SMEs sharing cybersecurity information, which can be applied to developing Information Sharing Agreements (ISAs) within SME supply chain consortia.",
"title": ""
},
{
"docid": "fb89a5aa87f1458177d6a32ef25fdf3b",
"text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "977d66be371c4f795048811fc2ac13d3",
"text": "The Proportional Resonant (PR) current controller provides gains at a certain frequency (resonant frequency) and eliminates steady state errors. Therefore, the PR controller can be successfully applied to single grid-connected PV inverter current control. On the contrary, a PI controller has steady-state errors and limited disturbance rejection capability. Compared with the Land LC filters, the LCL filter has excellent harmonic suppression capability, but the inherent resonant peak of the LCL filter may introduce instability in the whole system. Therefore, damping must be introduced to improve the control of the system. Considering the controller and the LCL filter active damping as a whole system makes the controller design method more complex. In fact, their frequency responses may affect each other. The traditional trial-and-error procedure is too time-consuming and the design process is inefficient. This paper provides a detailed analysis of the frequency response influence between the PR controller and the LCL filter regarded as a whole system. In addition, the paper presents a systematic method for designing controller parameters and the capacitor current feedback coefficient factor of LCL filter active-damping. The new method relies on meeting the stable margins of the system. Moreover, the paper also clarifies the impact of the grid on the inverter output current. Numerical simulation and a 3 kW laboratory setup assessed the feasibility and effectiveness of the proposed method. OPEN ACCESS Energies 2014, 7 3935",
"title": ""
},
{
"docid": "6c0021aebabc2eae4ba31334443357a6",
"text": "The trend of pushing deep learning from cloud to edge due to concerns of latency, bandwidth, and privacy has created demand for low-energy deep convolutional neural networks (CNNs). The single-layer classifier in [1] achieves sub-nJ operation, but is limited to moderate accuracy on low-complexity tasks (90% on MNIST). Larger CNN chips provide dataflow computing for high-complexity tasks (AlexNet) at mJ energy [2], but edge deployment remains a challenge due to off-chip DRAM access energy. This paper describes a mixed-signal binary CNN processor that performs image classification of moderate complexity (86% on CIFAR-10) and employs near-memory computing to achieve a classification energy of 3.8μJ, a 40x improvement over TrueNorth [3]. We accomplish this using (1) the BinaryNet algorithm for CNNs with weights and activations constrained to +1/−1 [4], which drastically simplifies multiplications (XNOR) and allows integrating all memory on-chip; (2) an energy-efficient switched-capacitor (SC) neuron that addresses BinaryNet's challenge of wide vector summation; (3) architectural parallelism, parameter reuse, and locality.",
"title": ""
},
{
"docid": "f57bcea5431a11cc431f76727ba81a26",
"text": "We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. This approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of the legislators’ voting histories, or the number of roll calls available for analysis. The model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, and the evolution of the legislative agenda; this is especially helpful since generally it is inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternate assumptions (e.g., log-rolling, party discipline). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. Our goal is to provide a statistical framework for combining the measurement of legislative preferences with tests of models of legislative behavior.",
"title": ""
},
{
"docid": "2a8326b355ba4a64111a581e3cf8ba3b",
"text": "There has been a growing interest in AI in the design of multiagent systems, especially in multiagent cooperative planning. In this paper, we investigate the extent to which methods from single-agent planning and learning can be applied in multiagent settings. We survey a number of different techniques from decision-theoretic planning and reinforcement learning and describe a number of interesting issues that arise with regard to coordinating the policies of individual agents. To this end, we describe multiagent Markov decision processes as a general model in which to frame this discussion. These are special n-person cooperative games in which agents share the same utility function. We discuss coordination mechanisms based on imposed conventions (or social laws) as well as learning methods for coordination. Our focus is on the decomposition of sequential decision processes so that coordination can be learned (or imposed) locally, at the level of individual states. We also discuss the use of structured problem representations and their role in the generalization of learned conventions and in approximation.",
"title": ""
},
{
"docid": "27bc95568467efccb3e6cc185e905e42",
"text": "Major studios and independent production firms (Indies) often have to select or “greenlight” a portfolio of scripts to turn into movies. Despite the huge financial risk at stake, there is currently no risk management tool they can use to aid their decisions, even though such a tool is sorely needed. In this paper, we developed a forecasting and risk management tool, based on movies scripts, to aid movie studios and production firms in their green-lighting decisions. The methodology developed can also assist outside investors if they have access to the scripts. Building upon and extending the previous literature, we extracted three levels of textual information (genre/content, bag-of-words, and semantics) from movie scripts. We then incorporate these textual variables as predictors, together with the contemplated production budget, into a BART-QL (Bayesian Additive Regression Tree for Quasi-Linear) model to obtain the posterior predictive distributions, rather than point forecasts, of the box office revenues for the corresponding movies. We demonstrate how the predictive distributions of box office revenues can potentially be used to help movie producers intelligently select their movie production portfolios based on their risk preferences, and we describe an illustrative analysis performed for an independent production firm.",
"title": ""
},
{
"docid": "8dc947fa1cba80700e4a2f88d87bc52a",
"text": "It is a field that is defined by its topic (and fundamental questions about it) rather than by a family of methods, much like urban studies or gerontology. Social informatics has been a subject of systematic analytical and critical research for the last 25 years. This body of research has developed theories and findings that are pertinent to understanding the design, development, and operation of usable information systems, including intranets, electronic forums, digital libraries and electronic journals.",
"title": ""
},
{
"docid": "a0c126480f0bce527a893853f6f3bec9",
"text": "Word problems are an established technique for teaching mathematical modeling skills in K-12 education. However, many students find word problems unconnected to their lives, artificial, and uninteresting. Most students find them much more difficult than the corresponding symbolic representations. To account for this phenomenon, an ideal pedagogy might involve an individually crafted progression of unique word problems that form a personalized plot. We propose a novel technique for automatic generation of personalized word problems. In our system, word problems are generated from general specifications using answer-set programming (ASP). The specifications include tutor requirements (properties of a mathematical model), and student requirements (personalization, characters, setting). Our system takes a logical encoding of the specification, synthesizes a word problem narrative and its mathematical model as a labeled logical plot graph, and realizes the problem in natural language. Human judges found our problems as solvable as the textbook problems, with a slightly more artificial language.",
"title": ""
},
{
"docid": "b2808bbd0fd36410cbcf700bc3625516",
"text": "Suboptimal nutrition is a leading cause of poor health. Nutrition and policy science have advanced rapidly, creating confusion yet also providing powerful opportunities to reduce the adverse health and economic impacts of poor diets. This review considers the history, new evidence, controversies, and corresponding lessons for modern dietary and policy priorities for cardiovascular diseases, obesity, and diabetes mellitus. Major identified themes include the importance of evaluating the full diversity of diet-related risk pathways, not only blood lipids or obesity; focusing on foods and overall diet patterns, rather than single isolated nutrients; recognizing the complex influences of different foods on long-term weight regulation, rather than simply counting calories; and characterizing and implementing evidence-based strategies, including policy approaches, for lifestyle change. Evidence-informed dietary priorities include increased fruits, nonstarchy vegetables, nuts, legumes, fish, vegetable oils, yogurt, and minimally processed whole grains; and fewer red meats, processed (eg, sodium-preserved) meats, and foods rich in refined grains, starch, added sugars, salt, and trans fat. More investigation is needed on the cardiometabolic effects of phenolics, dairy fat, probiotics, fermentation, coffee, tea, cocoa, eggs, specific vegetable and tropical oils, vitamin D, individual fatty acids, and diet-microbiome interactions. Little evidence to date supports the cardiometabolic relevance of other popular priorities: eg, local, organic, grass-fed, farmed/wild, or non-genetically modified. Evidence-based personalized nutrition appears to depend more on nongenetic characteristics (eg, physical activity, abdominal adiposity, gender, socioeconomic status, culture) than genetic factors. Food choices must be strongly supported by clinical behavior change efforts, health systems reforms, novel technologies, and robust policy strategies targeting economic incentives, schools and workplaces, neighborhood environments, and the food system. Scientific advances provide crucial new insights on optimal targets and best practices to reduce the burdens of diet-related cardiometabolic diseases.",
"title": ""
},
{
"docid": "49d3548babbc17cf265c60745dbea1a0",
"text": "OBJECTIVE\nTo evaluate the role of transabdominal three-dimensional (3D) ultrasound in the assessment of the fetal brain and its potential for routine neurosonographic studies.\n\n\nMETHODS\nWe studied prospectively 202 consecutive fetuses between 16 and 24 weeks' gestation. A 3D ultrasound volume of the fetal head was acquired transabdominally. The entire brain anatomy was later analyzed using the multiplanar images by a sonologist who was expert in neonatal cranial sonography. The quality of the conventional planes obtained (coronal, sagittal and axial, at different levels) and the ability of the 3D multiplanar neuroscan to visualize properly the major anatomical structures of the brain were evaluated.\n\n\nRESULTS\nAcceptable cerebral multiplanar images were obtained in 92% of the cases. The corpus callosum could be seen in 84% of the patients, the fourth ventricle in 78%, the lateral sulcus (Sylvian fissure) in 86%, the cingulate sulcus in 75%, the cerebellar hemispheres in 98%, the cerebellar vermis in 92%, the medulla oblongata in 97% and the cavum vergae in 9% of them. The thalami and the cerebellopontine cistern (cisterna magna) were identified in all cases. At or beyond 20 weeks, superior visualization (in > 90% of cases) was achieved of the cerebral fissures, the corpus callosum (97%), the supracerebellar cisterns (92%) and the third ventricle (93%). Some cerebral fissures were seen initially at 16-17 weeks.\n\n\nCONCLUSION\nMultiplanar images obtained by transabdominal 3D ultrasound provide a simple and effective approach for detailed evaluation of the fetal brain anatomy. This technique has the potential to be used in the routine fetal anomaly scan.",
"title": ""
},
{
"docid": "dc3de555216f10d84890ecb1165774ff",
"text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.",
"title": ""
},
{
"docid": "28445e19325130be11eae6d21963489e",
"text": "Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods.",
"title": ""
}
] |
scidocsrr
|
413ee699bef30878753ca72c96d9a50f
|
Has the bug really been fixed?
|
[
{
"docid": "d1c69dac07439ade32a962134753ab08",
"text": "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC/Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. Analysis of five open source projects shows that, for these projects, 19.3%-40.3% of bugs appear repeatedly in the memories, and 7.9%-15.5% of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8%-32.5% of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.",
"title": ""
}
] |
[
{
"docid": "f95e19e9fc88df498361c3cb12ae56b0",
"text": "Wearable health monitoring is an emerging technology for continuous monitoring of vital signs including the electrocardiogram (ECG). This signal is widely adopted to diagnose and assess major health risks and chronic cardiac diseases. This paper focuses on reviewing wearable ECG monitoring systems in the form of wireless, mobile and remote technologies related to older adults. Furthermore, the efficiency, user acceptability, strategies and recommendations on improving current ECG monitoring systems with an overview of the design and modelling are presented. In this paper, over 120 ECG monitoring systems were reviewed and classified into smart wearable, wireless, mobile ECG monitoring systems with related signal processing algorithms. The results of the review suggest that most research in wearable ECG monitoring systems focus on the older adults and this technology has been adopted in aged care facilitates. Moreover, it is shown that how mobile telemedicine systems have evolved and how advances in wearable wireless textile-based systems could ensure better quality of healthcare delivery. The main drawbacks of deployed ECG monitoring systems including imposed limitations on patients, short battery life, lack of user acceptability and medical professional’s feedback, and lack of security and privacy of essential data have been also discussed.",
"title": ""
},
{
"docid": "910380272b4a00626c9a6162b90416d6",
"text": "Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes’ decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, highdimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.",
"title": ""
},
{
"docid": "2debaecdacfa8e62bb78ff8f0cba2ce4",
"text": "Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software-engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages, such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception-handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and thus, limit the usefulness of the applications that use them. This paper discusses the effects of exceptionhandling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences—exceptions that are raised explicitly through throw statements—and exception-handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several softwareengineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception-handling constructs in Java programs, and their effects on some analysis tasks. Keywords— Exception handling, control-flow analysis, control-dependence analysis, data-flow analysis, program slicing, structural testing.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
{
"docid": "d3ec3eeb5e56bdf862f12fe0d9ffe71c",
"text": "This paper will communicate preliminary findings from applied research exploring how to ensure that serious games are cost effective and engaging components of future training solutions. The applied research is part of a multimillion pound program for the Department of Trade and Industry, and involves a partnership between UK industry and academia to determine how bespoke serious games should be used to best satisfy learning needs in a range of contexts. The main objective of this project is to produce a minimum of three serious games prototypes for clients from different sectors (e.g., military, medical and business) each prototype addressing a learning need or learning outcome that helps solve a priority business problem or fulfill a specific training need. This paper will describe a development process that aims to encompass learner specifics and targeted learning outcomes in order to ensure that the serious game is successful. A framework for describing game-based learning scenarios is introduced, and an approach to the analysis that effectively profiles the learner within the learner group with respect to game-based learning is outlined. The proposed solution also takes account of relevant findings from serious games research on particular learner groups that might support the selection and specification of a game. A case study on infection control will be used to show how this approach to the analysis is being applied for a healthcare issue.",
"title": ""
},
{
"docid": "771dbdda9855595e3ad71b1a7aa5377a",
"text": "We present a system, TransProse, that automatically generates musical pieces from text. TransProse uses known relations between parameters of music such as tempo and scale, and the emotions they evoke. Further, it uses a novel mechanism to determine note sequence that captures the emotional activity in the text. The work has applications in information visualization, in creating audio-visual e-books, and in developing music apps.",
"title": ""
},
{
"docid": "8944e004d344e2fe9fe06b58ae0c07da",
"text": "virtual reality, developing techniques for synthesizing arbitrary views has become an important technical issue. Given an object’s structural model (such as a polygon or volume model), it’s relatively easy to synthesize arbitrary views. Generating a structural model of an object, however, isn’t necessarily easy. For this reason, research has been progressing on a technique called image-based modeling and rendering (IBMR) that avoids this problem. To date, researchers have performed studies on various IBMR techniques. (See the “Related Work” sidebar for more specific information.) Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we’ve developed a system that can perform processing in real time from image pickup to interactive display, using video sequences instead of static images, at 10 frames per second (frames/sec). In our system, images on layers are view dependent, and we update both the shape and image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scenes, which improves the quality of the synthesized images. In this sense, our prototype system may be one of the first full real-time IBMR systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.",
"title": ""
},
{
"docid": "f638fa2d4e358f91a05fc5329d6058f0",
"text": "We present a computational framework for Theory of Mind (ToM): the human ability to make joint inferences about the unobservable beliefs and preferences underlying the observed actions of other agents. These mental state attributions can be understood as Bayesian inferences in a probabilistic generative model for rational action, or planning under uncertain and incomplete information, formalized as a Partially Observable Markov Decision Problem (POMDP). That is, we posit that ToM inferences approximately reconstruct the combination of a reward function and belief state trajectory for an agent based on observing that agent’s action sequence in a given environment. We test this POMDP model by showing human subjects the trajectories of agents moving in simple spatial environments and asking for joint inferences about the agents’ utilities and beliefs about unobserved aspects of the environment. Our model performs substantially better than two simpler variants: one in which preferences are inferred without reference to an agents’ beliefs, and another in which beliefs are inferred without reference to the agent’s dynamic observations in the environment. We find that preference inferences are substantially more robust and consistent with our model’s predictions than are belief inferences, in line with classic work showing that the ability to infer goals is more concretely grounded in visual data, develops earlier in infancy, and can be localized to specific neurons in the primate brain.",
"title": ""
},
{
"docid": "0e1d93bb8b1b2d2e3453384092f39afc",
"text": "Repetitive or prolonged head flexion posture while using a smartphone is known as one of risk factors for pain symptoms in the neck. To quantitatively assess the amount and range of head flexion of smartphone users, head forward flexion angle was measured from 18 participants when they were conducing three common smartphone tasks (text messaging, web browsing, video watching) while sitting and standing in a laboratory setting. It was found that participants maintained head flexion of 33-45° (50th percentile angle) from vertical when using the smartphone. The head flexion angle was significantly larger (p < 0.05) for text messaging than for the other tasks, and significantly larger while sitting than while standing. Study results suggest that text messaging, which is one of the most frequently used app categories of smartphone, could be a main contributing factor to the occurrence of neck pain of heavy smartphone users. Practitioner Summary: In this laboratory study, the severity of head flexion of smartphone users was quantitatively evaluated when conducting text messaging, web browsing and video watching while sitting and standing. Study results indicate that text messaging while sitting caused the largest head flexion than that of other task conditions.",
"title": ""
},
{
"docid": "e1afaed983932bc98c5b0b057d4b5ab6",
"text": "This paper presents a novel solution for the problem of building text classifier using positive documents (P) and unlabeled documents (U). Here, the unlabeled documents are mixed with positive and negative documents. This problem is also called PU-Learning. The key feature of PU-Learning is that there is no negative document for training. Recently, several approaches have been proposed for solving this problem. Most of them are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. Generally speaking, these existing approaches do not perform well when the size of P is small. In this paper, we propose a new approach aiming at improving the system when the size of P is small. This approach combines the graph-based semi-supervised learning method with the two-step method. Experiments indicate that our proposed method performs well especially when the size of P is small.",
"title": ""
},
{
"docid": "be19dab37fdd4b6170816defbc550e2e",
"text": "A new continuous transverse stub (CTS) antenna array is presented in this paper. It is built using the substrate integrated waveguide (SIW) technology and designed for beam steering applications in the millimeter waveband. The proposed CTS antenna array consists of 18 stubs that are arranged in the SIW perpendicular to the wave propagation. The performance of the proposed CTS antenna array is demonstrated through simulation and measurement results. From the experimental results, the peak gain of 11.63-16.87 dBi and maximum radiation power of 96.8% are achieved in the frequency range 27.06-36 GHz with low cross-polarization level. In addition, beam steering capability is achieved in the maximum radiation angle range varying from -43° to 3 ° depending on frequency.",
"title": ""
},
{
"docid": "8a1d0d2767a35235fa5ac70818ec92e7",
"text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.",
"title": ""
},
{
"docid": "77a156afb22bbecd37d0db073ef06492",
"text": "Rhonda Farrell University of Fairfax, Vienna, VA ABSTRACT While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges. This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.",
"title": ""
},
{
"docid": "edf41dbd01d4060982c2c75469bbac6b",
"text": "In this paper, we develop a design method for inclined and displaced (compound) slotted waveguide array antennas. The characteristics of a compound slot element and the design results by using an equivalent circuit are shown. The effectiveness of the designed antennas is verified through experiments.",
"title": ""
},
{
"docid": "ea77710f946e118eeed7a0240a98ba79",
"text": "Magnesium-Calcium (Mg-Ca) alloy has received considerable attention as an emerging biodegradable implant material in orthopedic fixation applications. The biodegradable Mg-Ca alloys avoid stress shielding and secondary surgery inherent with permanent metallic implant materials. They also provide sufficient mechanical strength in load carrying applications as opposed to biopolymers. However, the key issue facing a biodegradable Mg-Ca implant is the fast corrosion in the human body environment. The ability to adjust degradation rate of Mg-Ca alloys is critical for the successful development of biodegradable orthopedic implants. This paper focuses on the functions and requirements of bone implants and critical issues of current implant biomaterials. Microstructures and mechanical properties of Mg-Ca alloys, and the unique properties of novel magnesium-calcium implant materials have been reviewed. Various manufacturing techniques to process Mg-Ca based alloys have been analyzed regarding their impacts on implant performance. Corrosion performance of Mg-Ca alloys processed by different manufacturing techniques was compared. In addition, the societal and economical impacts of developing biodegradable orthopedic implants have been emphasized.",
"title": ""
},
{
"docid": "2ead9e973f2a237b604bf68284e0acf1",
"text": "Cognitive radio networks challenge the traditional wireless networking paradigm by introducing concepts firmly stemmed into the Artificial Intelligence (AI) field, i.e., learning and reasoning. This fosters optimal resource usage and management allowing a plethora of potential applications such as secondary spectrum access, cognitive wireless backbones, cognitive machine-to-machine etc. The majority of overview works in the field of cognitive radio networks deal with the notions of observation and adaptations, which are not a distinguished cognitive radio networking aspect. Therefore, this paper provides insight into the mechanisms for obtaining and inferring knowledge that clearly set apart the cognitive radio networks from other wireless solutions.",
"title": ""
},
{
"docid": "f45d8267b8ae96d043c5c6773fe6c90f",
"text": "The function of the brain is intricately woven into the fabric of time. Functions such as (1) storing and accessing past memories, (2) dealing with immediate sensorimotor needs in the present, and (3) projecting into the future for goal-directed behavior are good examples of how key brain processes are integrated into time. Moreover, it can even seem that the brain generates time (in the psychological sense, not in the physical sense) since, without the brain, a living organism cannot have the notion of past nor future. When combined with an evolutionary perspective, this seemingly straightforward idea that the brain enables the conceptualization of past and future can lead to deeper insights into the principles of brain function, including that of consciousness. In this paper, we systematically investigate, through simulated evolution of artificial neural networks, conditions for the emergence of past and future in simple neural architectures, and discuss the implications of our findings for consciousness and mind uploading.",
"title": ""
},
{
"docid": "f0c7d922be0a1cc37b76d106b6ca08ad",
"text": "AIM\nTo provide an overview of interpretive phenomenology.\n\n\nBACKGROUND\nPhenomenology is a philosophy and a research approach. As a research approach, it is used extensively in nursing and 'interpretive' phenomenology is becoming increasingly popular.\n\n\nDATA SOURCES\nOnline and manual searches of relevant books and electronic databases were undertaken.\n\n\nREVIEW METHODS\nLiterature review on papers on phenomenology, research and nursing (written in English) was undertaken.\n\n\nDISCUSSION\nA brief outline of the origins of the concept, and the influence of 'descriptive' phenomenology on the development of interpretive phenomenology is provided. Its aim, origins and philosophical basis, including the core concepts of dasein, fore-structure/pre-understanding, world view existential themes and the hermeneutic circle, are described and the influence of these concepts in phenomenological nursing research is illustrated.\n\n\nCONCLUSION\nThis paper will assist readers when deciding whether interpretive phenomenology is appropriate for their research projects.\n\n\nIMPLICATIONS FOR RESEARCH/PRACTICE\nThis paper adds to the discussion on interpretive phenomenology and helps inform readers of its use as a research methodology.",
"title": ""
},
{
"docid": "5378de08d9014988b6fd1720902b30f1",
"text": "This paper presents the simulation and experimental investigations of a printed microstrip slot antenna. It is a quarter wavelength monopole slot cut in the finite ground plane edge, and fed electromagnetically by a microstrip transmission line. It provides a wide impedance bandwidth adjustable by variation of its parameters, such as the relative permittivity and thickness of the substrate, width, and location of the slot in the ground plane, and feed and ground plane dimensions. The ground plane is small, 50 mm/spl times/80 mm, and is about the size of a typical PC wireless card. At the center frequency of 3.00 GHz, its width of 50 mm is about /spl lambda//2 and influences the slot impedance and bandwidth significantly. An impedance bandwidth (S/sub 11/=-10 dB) of up to about 60% is achieved by individually optimizing its parameters. The simulation results are confirmed experimentally. A dual complementary slot antenna configuration is also investigated for the polarization diversity.",
"title": ""
},
{
"docid": "85007af502deac21cd6477945e0578d6",
"text": "State of the art movie restoration methods either estimate motion and filter out the trajectories, or compensate the motion by an optical flow estimate and then filter out the compensated movie. Now, the motion estimation problem is ill posed. This fact is known as the aperture problem: trajectories are ambiguous since they could coincide with any promenade in the space-time isophote surface. In this paper, we try to show that, for denoising, the aperture problem can be taken advantage of. Indeed, by the aperture problem, many pixels in the neighboring frames are similar to the current pixel one wishes to denoise. Thus, denoising by an averaging process can use many more pixels than just the ones on a single trajectory. This observation leads to use for movies a recently introduced image denoising method, the NL-means algorithm. This static 3D algorithm outperforms motion compensated algorithms, as it does not lose movie details. It involves the whole movie isophote and not just a trajectory.",
"title": ""
}
] |
scidocsrr
|
e745cdf3341de90bb9b19a4739da8659
|
Game design principles in everyday fitness applications
|
[
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
}
] |
[
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "70ea3e32d4928e7fd174b417ec8b6d0e",
"text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "b1453c089b5b9075a1b54e4f564f7b45",
"text": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crashes. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "179d8f41102862710595671e5a819d70",
"text": "Detecting changes in time series data is an important data analysis task with application in various scientific domains. In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike the state of the art approaches, our query response time is independent from the number of change points in the data and the user-defined change threshold.",
"title": ""
},
{
"docid": "c59aaad99023e5c6898243db208a4c3c",
"text": "This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789 914 gold standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided by DRIVE. Training was conducted confined to the dedicated training set from the DRIVE database, and feature-based AdaBoost classifier (FABC) was tested on the 20 images from the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, but outperforming their accuracy (0.9597 versus 0.9473 for the nearest performer).",
"title": ""
},
{
"docid": "e11b4a08fc864112d4f68db1ea9703e9",
"text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and state that decomposition technique is better than the hybrid technique for this application.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "a1046f5282cf4057fd143fdce79c6990",
"text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.",
"title": ""
},
{
"docid": "15e034d722778575b43394b968be19ad",
"text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "24e1a6f966594d4230089fc433e38ce6",
"text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.",
"title": ""
}
] |
scidocsrr
|
6468ad0ba7effeeb5f870e355139ca48
|
Linked Stream Data Processing
|
[
{
"docid": "24da291ca2590eb614f94f8a910e200d",
"text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.",
"title": ""
}
] |
[
{
"docid": "8fc89fce21bd4f8dced2265b9a8cdfe7",
"text": "With the rapid development of 3GPP and its related techniques, evaluation of system level performance is in great need. However, LTE system level simulator is secured as commercial secrets in most 3GPP members. In this paper, we introduce our Matlab-based LTE system level simulator according to 3GPP specifications and related proposals. We mainly focus on channel model and physical abstract of transmission. Brief introduction of every part is given and physical concept and analysis are given.",
"title": ""
},
{
"docid": "ad860674746dcf04156b3576174a9120",
"text": "Predicting the popularity dynamics of Twitter hashtags has a broad spectrum of applications. Existing works have primarily focused on modeling the popularity of individual tweets rather than the underlying hashtags. As a result, they fail to consider several realistic factors contributing to hashtag popularity. In this paper, we propose Large Margin Point Process (LMPP), a probabilistic framework that integrates hashtag-tweet influence and hashtaghashtag competitions, the two factors which play important roles in hashtag propagation. Furthermore, while considering the hashtag competitions, LMPP looks into the variations of popularity rankings of the competing hashtags across time. Extensive experiments on seven real datasets demonstrate that LMPP outperforms existing popularity prediction approaches by a significant margin. Additionally, LMPP can accurately predict the relative rankings of competing hashtags, offering additional advantage over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "b7956722389df722029b005d0f7566a2",
"text": "Social media platforms such as Twitter are becoming increasingly mainstream which provides valuable user-generated information by publishing and sharing contents. Identifying interesting and useful contents from large text-streams is a crucial issue in social media because many users struggle with information overload. Retweeting as a forwarding function plays an important role in information propagation where the retweet counts simply reflect a tweet's popularity. However, the main reason for retweets may be limited to personal interests and satisfactions. In this paper, we use a topic identification as a proxy to understand a large number of tweets and to score the interestingness of an individual tweet based on its latent topics. Our assumption is that fascinating topics generate contents that may be of potential interest to a wide audience. We propose a novel topic model called Trend Sensitive-Latent Dirichlet Allocation (TS-LDA) that can efficiently extract latent topics from contents by modeling temporal trends on Twitter over time. The experimental results on real world data from Twitter demonstrate that our proposed method outperforms several other baseline methods. With the rise of the Internet, blogs, and mobile devices, social media has also evolved into an information provider by publishing and sharing user-generated contents. By analyzing textual data which represents the thoughts and communication between users, it is possible to understand the public needs and concerns about what constitutes valuable information from an academic, marketing , and policy-making perspective. Twitter (http://twitter.com) is one of the social media platforms that enables its users to generate and consume useful information about issues and trends from text streams in real-time. Twitter and its 500 million registered users produce over 340 million tweets, which are text-based messages of up to 140 characters, per day 1. Also, users subscribe to other users in order to view their followers' relationships and timelines which show tweets in reverse chronological order. Although tweets may contain valuable information, many do not and are not interesting to users. A large number of tweets can overwhelm users when they check their Twitter timeline. Thus, finding and recommending tweets that are of potential interest to users from a large volume of tweets that is accumulated in real-time is a crucial but challenging task. A simple but effective way to solve these problems is to use the number of retweets. A retweet is a function that allows a user to re-post another user's tweet and other information such …",
"title": ""
},
{
"docid": "cb2e602af2467b3d8ad7abdd98e6ddfd",
"text": "The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and <italic>cache on <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"carlsson-ieq1-2614805.gif\"/></alternatives></inline-formula></italic>th <italic>request</italic> for different <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math><alternatives> <inline-graphic xlink:href=\"carlsson-ieq2-2614805.gif\"/></alternatives></inline-formula>. The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. We find that although room for substantial improvement exists when comparing performance to that of a perfect “oracle” policy, such improvements are unlikely to be achievable in practice.",
"title": ""
},
{
"docid": "edcdae3f9da761cedd52273ccd850520",
"text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.",
"title": ""
},
{
"docid": "7acfd4b984ea4ce59f95221463c02551",
"text": "An autopilot system includes several modules, and the software architecture has a variety of programs. As we all know, it is necessary that there exists one brand with a compatible sensor system till now, owing to complexity and variety of sensors before. In this paper, we apply (Robot Operating System) ROS-based distributed architecture. Deep learning methods also adopted by perception modules. Experimental results demonstrate that the system can reduce the dependence on the hardware effectively, and the sensor involved is convenient to achieve well the expected functionalities. The system adapts well to some specific driving scenes, relatively fixed and simple driving environment, such as the inner factories, bus lines, parks, highways, etc. This paper presents the case study of autopilot system based on ROS and deep learning, especially convolution neural network (CNN), from the perspective of system implementation. And we also introduce the algorithm and realization process including the core module of perception, decision, control and system management emphatically.",
"title": ""
},
{
"docid": "e1095273f4d65e31ea53d068c3dee348",
"text": "We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the /spl lscr//sub 1/-norm. A number of recent theoretical results on sparsifying properties of /spl lscr//sub 1/ penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Crame/spl acute/r-Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution, improved robustness to noise, limitations in data quantity, and correlation of the sources, as well as not requiring an accurate initialization.",
"title": ""
},
{
"docid": "a959b14468625cb7692de99a986937c4",
"text": "In this paper, we describe a novel method for searching and comparing 3D objects. The method encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and to compare them. The skeletal graphs can be manually annotated to refine or restructure the search. This helps in choosing between a topological similarity and a geometric (shape) similarity. A feature of skeletal matching is the ability to perform part-matching, and its inherent intuitiveness, which helps in defining the search and in visualizing the results. Also, the matching results, which are presented in a per-node basis can be used for driving a number of registration algorithms, most of which require a good initial guess to perform registration. In this paper, we also describe a visualization tool to aid in the selection and specification of the matched objects.",
"title": ""
},
{
"docid": "22e559b9536b375ded6516ceb93652ef",
"text": "In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"title": ""
},
{
"docid": "bc4d9587ba33464d74302045336ddc38",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "6c2b19b2888d00fccb1eae37352d653d",
"text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>",
"title": ""
},
{
"docid": "121a388391c12de1329e74fdeebdaf10",
"text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.",
"title": ""
},
{
"docid": "3c3980cb427c2630016f26f18cbd4ab9",
"text": "MOS (mean opinion score) subjective quality studies are used to evaluate many signal processing methods. Since laboratory quality studies are time consuming and expensive, researchers often run small studies with less statistical significance or use objective measures which only approximate human perception. We propose a cost-effective and convenient measure called crowdMOS, obtained by having internet users participate in a MOS-like listening study. Workers listen and rate sentences at their leisure, using their own hardware, in an environment of their choice. Since these individuals cannot be supervised, we propose methods for detecting and discarding inaccurate scores. To automate crowdMOS testing, we offer a set of freely distributable, open-source tools for Amazon Mechanical Turk, a platform designed to facilitate crowdsourcing. These tools implement the MOS testing methodology described in this paper, providing researchers with a user-friendly means of performing subjective quality evaluations without the overhead associated with laboratory studies. Finally, we demonstrate the use of crowdMOS using data from the Blizzard text-to-speech competition, showing that it delivers accurate and repeatable results.",
"title": ""
},
{
"docid": "2fe45390c2e54c72f6575e291fd2db94",
"text": "Green start-ups contribute towards a transition to a more sustainable economy by developing sustainable and environmentally friendly innovation and bringing it to the market. Due to specific product/service characteristics, entrepreneurial motivation and company strategies that might differ from that of other start-ups, these companies might struggle even more than usual with access to finance in the early stages. This conceptual paper seeks to explain these challenges through the theoretical lenses of entrepreneurial finance and behavioural finance. While entrepreneurial finance theory contributes to a partial understanding of green start-up finance, behavioural finance is able to solve a remaining explanatory deficit produced by entrepreneurial finance theory. Although some behavioural finance theorists are suggesting that the current understanding of economic rationality underlying behavioural finance research is inadequate, most scholars have not yet challenged these assumptions, which constrict a comprehensive and realistic description of the reality of entrepreneurial finance in green start-ups. The aim of the paper is thus, first, to explore the specifics of entrepreneurial finance in green start-ups and, second, to demonstrate the need for a more up-to-date conception of rationality in behavioural finance theory in order to enable realistic empirical research in this field.",
"title": ""
},
{
"docid": "a4fdd4d5a489fb909fc808ad9d924f76",
"text": "Analyzing and explaining relationships between entities in a knowledge graph is a fundamental problem with many applications. Prior work has been limited to extracting the most informative subgraph connecting two entities of interest. This paper extends and generalizes the state of the art by considering the relationships between two sets of entities given at query time. Our method, coined ESPRESSO, explains the connection between these sets in terms of a small number of relatedness cores: dense sub-graphs that have strong relations with both query sets. The intuition for this model is that the cores correspond to key events in which entities from both sets play a major role. For example, to explain the relationships between US politicians and European politicians, our method identifies events like the PRISM scandal and the Syrian Civil War as relatedness cores. Computing cores of bounded size is NP-hard. This paper presents efficient approximation algorithms. Our experiments with real-life knowledge graphs demonstrate the practical viability of our approach and, through user studies, the superior output quality compared to state-of-the-art baselines.",
"title": ""
},
{
"docid": "a9015698a5df36a2557b97838e6e05f9",
"text": "The evaluation of whole-sentence semantic structures plays an important role in semantic parsing and large-scale semantic structure annotation. However, there is no widely-used metric to evaluate wholesentence semantic structures. In this paper, we present smatch, a metric that calculates the degree of overlap between two semantic feature structures. We give an efficient algorithm to compute the metric and show the results of an inter-annotator agreement study.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "1164e5b54ce970b55cf65cca0a1fbcb1",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "13685fa8e74d57d05d5bce5b1d3d4c93",
"text": "Children left behind by parents who are overseas Filipino workers (OFW) benefit from parental migration because their financial status improves. However, OFW families might emphasize the economic benefits to compensate for their separation, which might lead to materialism among children left behind. Previous research indicates that materialism is associated with lower well-being. The theory is that materialism focuses attention on comparing one's possessions to others, making one constantly dissatisfied and wanting more. Research also suggests that gratitude mediates this link, with the focus on acquiring more possessions that make one less grateful for current possessions. This study explores the links between materialism, gratitude, and well-being among 129 adolescent children of OFWs. The participants completed measures of materialism, gratitude, and well-being (life satisfaction, self-esteem, positive and negative affect). Results showed that gratitude mediated the negative relationship between materialism and well-being (and its positive relationship with negative affect). Children of OFWs who have strong materialist orientation seek well-being from possessions they do not have and might find it difficult to be grateful of their situation, contributing to lower well-being. The findings provide further evidence for the mediated relationship between materialism and well-being in a population that has not been previously studied in the related literature. The findings also point to two possible targets for psychosocial interventions for families and children of OFWs.",
"title": ""
}
] |
scidocsrr
|
cdc0255545fed60d1857d1ca046a8f60
|
Solid-State Thermal Management for Lithium-Ion EV Batteries
|
[
{
"docid": "d1ba66a0c84fccad40d63a2bf7f5dd54",
"text": "Thermal management of batteries in electric vehicles (EVs) and hybrid electric vehicles (HEVs) is essential for effective operation in all climates. This has been recognized in the design of battery modules and packs for pre-production prototype or production EVs and HEVs. Designs are evolving and various issues are being addressed. There are trade-offs between performance, functionality, volume, mass, cost, maintenance, and safety. In this paper, we will review some of the issues and associated solutions for battery thermal management and what information is needed for proper design of battery management systems. We will discuss such topics as active cooling versus passive cooling, liquid cooling versus air cooling, cooling and heating versus cooling only systems, and relative needs of thermal management for VRLA, NiMH, and Li-Ion batteries.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "da5362ac9f2a8d4e7ea4126797da6d5f",
"text": "Generating a novel and descriptive caption of an image is drawing increasing interests in computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM (Long Short-Term Memory)) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information at high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirror are proposed to prevent overfitting in training deep models. To understand how our models “translate” image to sentence, we visualize and qualitatively analyze the evolution of Bi-LSTM internal states over time. The effectiveness and generality of proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K datasets. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also prove that multi-task learning is beneficial to increase model generality and gain performance. We also demonstrate the performance of transfer learning of the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.",
"title": ""
},
{
"docid": "db1d5903d2d49d995f5d3b6dd0681323",
"text": "Diffusion tensor imaging (DTI) is an exciting new MRI modality that can reveal detailed anatomy of the white matter. DTI also allows us to approximate the 3D trajectories of major white matter bundles. By combining the identified tract coordinates with various types of MR parameter maps, such as T2 and diffusion properties, we can perform tract-specific analysis of these parameters. Unfortunately, 3D tract reconstruction is marred by noise, partial volume effects, and complicated axonal structures. Furthermore, changes in diffusion anisotropy under pathological conditions could alter the results of 3D tract reconstruction. In this study, we created a white matter parcellation atlas based on probabilistic maps of 11 major white matter tracts derived from the DTI data from 28 normal subjects. Using these probabilistic maps, automated tract-specific quantification of fractional anisotropy and mean diffusivity were performed. Excellent correlation was found between the automated and the individual tractography-based results. This tool allows efficient initial screening of the status of multiple white matter tracts.",
"title": ""
},
{
"docid": "86aca69fa9d46e27a26c586962d9309f",
"text": "FX&MM MAY ISSUE 2010 To subscribe online visit: www.fx-mm.com REVERSE FACTORING – BENEFITS FOR ALL A growing number of transaction banks are implementing supplier finance programmes for their large credit-worthy customers who wish to support their supply chain partners. Reverse factoring is the most popular model, enabling banks to provide suppliers with finance at a lower cost than they would normally achieve through direct credit facilities. The credit arbitrage is achieved by the bank securing an undertaking from the buyer (who has a higher credit rating than the suppliers) to settle all invoices at maturity. By financing the buyer’s approved payables, the bank mitigates transaction and fraud risk. In addition to the lower borrowing costs and the off balance sheet treatment of these receivables purchase programmes, a further attraction for suppliers invoicing in foreign currencies is that by taking early payment they protect themselves against foreign exchange fluctuations. In return, the buyer ensures a more stable and robust supply chain, can choose to negotiate lower costs of goods and extend Days Payable Outstanding, improving working capital. Given the compelling benefits of reverse factoring, the market challenge is to drive these new programmes into mainstream acceptance.",
"title": ""
},
{
"docid": "1171b827d9057796a0dccc86ae414ea1",
"text": "The diffusion of new digital technologies renders digital transformation relevant for nearly every industry. Therefore, the maturity of firms in mastering this fundamental organizational change is increasingly discussed in practice-oriented literature. These studies, however, suffer from some shortcomings. Most importantly, digital maturity is typically described along a linear scale, thus assuming that all firms do and need to proceed through the same path. We challenge this assumption and derive a more differentiated classification scheme based on a comprehensive literature review as well as an exploratory analysis of a survey on digital transformation amongst 327 managers. Based on these findings we propose two scales for describing a firm’s digital maturity: first, the impact that digital transformation has on a specific firm; second, the readiness of the firm to master the upcoming changes. We demonstrate the usefulness of this two scale measure by empirically deriving five digital maturity clusters as well as further empirical evidence. Our framework illuminates the monolithic block of digital maturity by allowing for a more differentiated firm-specific assessment – thus, it may serve as a first foundation for future research on digital maturity.",
"title": ""
},
{
"docid": "fc03ae4a9106e494d1b74451ca22190b",
"text": "With emergencies being, unfortunately, part of our lives, it is crucial to efficiently plan and allocate emergency response facilities that deliver effective and timely relief to people most in need. Emergency Medical Services (EMS) allocation problems deal with locating EMS facilities among potential sites to provide efficient and effective services over a wide area with spatially distributed demands. It is often problematic due to the intrinsic complexity of these problems. This paper reviews covering models and optimization techniques for emergency response facility location and planning in the literature from the past few decades, while emphasizing recent developments. We introduce several typical covering models and their extensions ordered from simple to complex, including Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models. In addition, recent developments on hypercube queuing models, dynamic allocation models, gradual covering models, and cooperative covering models are also presented in this paper. The corresponding optimization X. Li (B) · Z. Zhao · X. Zhu Department of Industrial and Information Engineering, University of Tennessee, 416 East Stadium Hall, Knoxville, TN 37919, USA e-mail: [email protected] Z. Zhao e-mail: [email protected] X. Zhu e-mail: [email protected] T. Wyatt College of Nursing, University of Tennessee, 200 Volunteer Boulevard, Knoxville, TN 37996-4180, USA e-mail: [email protected]",
"title": ""
},
{
"docid": "22e3a0e31a70669f311fb51663a76f9c",
"text": "A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected.",
"title": ""
},
{
"docid": "c63465c12bbf8474293c839f9ad73307",
"text": "Maintaining the balance or stability of legged robots in natural terrains is a challenging problem. Besides the inherent unstable characteristics of legged robots, the sources of instability are the irregularities of the ground surface and also the external pushes. In this paper, a push recovery framework for restoring the robot balance against external unknown disturbances will be demonstrated. It is assumed that the magnitude of exerted pushes is not large enough to use a reactive stepping strategy. In the comparison with previous methods, which a simplified model such as point mass model is used as the model of the robot for studying the push recovery problem, the whole body dynamic model will be utilized in present work. This enhances the capability of the robot to exploit all of the DOFs to recover its balance. To do so, an explicit dynamic model of a quadruped robot will be derived. The balance controller is based on the computation of the appropriate acceleration of the main body. It is calculated to return the robot to its desired position after the perturbation. This acceleration should be chosen under the stability and friction conditions. To calculate main body acceleration, an optimization problem is defined so that the stability, friction condition considered as its constraints. The simulation results show the effectiveness of the proposed algorithm. The robot can restore its balance against the large disturbance solely through the adjustment of the position and orientation of main body.",
"title": ""
},
{
"docid": "4147094e444521bcca3b24eceeabf45f",
"text": "Application designers must decide whether to store large objects (BLOBs) in a filesystem or in a database. Generally, this decision is based on factors such as application simplicity or manageability. Often, system performance affects these factors. Folklore tells us that databases efficiently handle large numbers of small objects, while filesystems are more efficient for large objects. Where is the break-even point? When is accessing a BLOB stored as a file cheaper than accessing a BLOB stored as a database record? Of course, this depends on the particular filesystem, database system, and workload in question. This study shows that when comparing the NTFS file system and SQL Server 2005 database system on a create, {read, replace}* delete workload, BLOBs smaller than 256KB are more efficiently handled by SQL Server, while NTFS is more efficient BLOBS larger than 1MB. Of course, this break-even point will vary among different database systems, filesystems, and workloads. By measuring the performance of a storage server workload typical of web applications which use get/put protocols such as WebDAV [WebDAV], we found that the break-even point depends on many factors. However, our experiments suggest that storage age, the ratio of bytes in deleted or replaced objects to bytes in live objects, is dominant. As storage age increases, fragmentation tends to increase. The filesystem we study has better fragmentation control than the database we used, suggesting the database system would benefit from incorporating ideas from filesystem architecture. Conversely, filesystem performance may be improved by using database techniques to handle small files. Surprisingly, for these studies, when average object size is held constant, the distribution of object sizes did not significantly affect performance. We also found that, in addition to low percentage free space, a low ratio of free space to average object size leads to fragmentation and performance degradation.",
"title": ""
},
{
"docid": "a9314b036f107c99545349ccdeb30781",
"text": "The development and implementation of language teaching programs can be approached in several different ways, each of which has different implications for curriculum design. Three curriculum approaches are described and compared. Each differs with respect to when issues related to input, process, and outcomes, are addressed. Forward design starts with syllabus planning, moves to methodology, and is followed by assessment of learning outcomes. Resolving issues of syllabus content and sequencing are essential starting points with forward design, which has been the major tradition in language curriculum development. Central design begins with classroom processes and methodology. Issues of syllabus and learning outcomes are not specified in detail in advance and are addressed as the curriculum is implemented. Many of the ‘innovative methods’ of the 1980s and 90s reflect central design. Backward design starts from a specification of learning outcomes and decisions on methodology and syllabus are developed from the learning outcomes. The Common European Framework of Reference is a recent example of backward design. Examples will be given to suggest how the distinction between forward, central and backward design can clarify the nature of issues and trends that have emerged in language teaching in recent years.",
"title": ""
},
{
"docid": "6975d01d114a8ecd45188cb99fd8b770",
"text": "Flowerlike α-Fe(2)O(3) nanostructures were synthesized via a template-free microwave-assisted solvothermal method. All chemicals used were low-cost compounds and environmentally benign. These flowerlike α-Fe(2)O(3) nanostructures had high surface area and abundant hydroxyl on their surface. When tested as an adsorbent for arsenic and chromium removal, the flowerlike α-Fe(2)O(3) nanostructures showed excellent adsorption properties. The adsorption mechanism for As(V) and Cr(VI) onto flowerlike α-Fe(2)O(3) nanostructures was elucidated by X-ray photoelectron spectroscopy and synchrotron-based X-ray absorption near edge structure analysis. The results suggested that ion exchange between surface hydroxyl groups and As(V) or Cr(VI) species was accounted for by the adsorption. With maximum capacities of 51 and 30 mg g(-1) for As(V) and Cr(VI), respectively, these low-cost flowerlike α-Fe(2)O(3) nanostructures are an attractive adsorbent for the removal of As(V) and Cr(VI) from water.",
"title": ""
},
{
"docid": "39e9fe27f70f54424df1feec453afde3",
"text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.",
"title": ""
},
{
"docid": "4e56d4b3fe5ed2285487ea98915a359c",
"text": "A 1.2 V 60 GHz 120 mW phase-locked loop employing a quadrature differential voltage-controlled oscillator, a programmable charge pump, and a frequency quadrupler is presented. Implemented in a 90 m CMOS process and operating at 60 GHz with a 1.2 V supply, the PLL achieves a phase noise of −91 dBc/Hz at a frequency offset of 1 MHz.",
"title": ""
},
{
"docid": "3c6ced0f3778c2d3c123a1752c50d276",
"text": "Business intelligence (BI) has been referred to as the process of making better decisions through the use of people, processes, data and related tools and methodologies. Data mining is the extraction of hidden stating information from large databases. It is a powerful new technology with large potential to help the company's to focus on the most necessary information in the data warehouse. This study gives us an idea of how data mining is applied in exhibiting business intelligence thereby helping the organizations to make better decisions. Keywords-Business intelligence, data mining, database, information technology, management information system —————————— ——————————",
"title": ""
},
{
"docid": "53562dbb7087c83c6c84875e5e784b1b",
"text": "ALIZE is an open-source platform for speaker recognition. The ALIZE library implements a low-level statistical engine based on the well-known Gaussian mixture modelling. The toolkit includes a set of high level tools dedicated to speaker recognition based on the latest developments in speaker recognition such as Joint Factor Analysis, Support Vector Machine, i-vector modelling and Probabilistic Linear Discriminant Analysis. Since 2005, the performance of ALIZE has been demonstrated in series of Speaker Recognition Evaluations (SREs) conducted by NIST and has been used by many participants in the last NISTSRE 2012. This paper presents the latest version of the corpus and performance on the NIST-SRE 2010 extended task.",
"title": ""
},
{
"docid": "16b95a93fdbf0e86f4b08dca125bbcc4",
"text": "We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents. The proposed model uses a sequence-to-sequence framework that encodes the document and generates a question (answer) given an answer (question). Significant improvement in model performance is observed empirically on the SQuAD corpus, confirming our hypothesis that the model benefits from jointly learning to perform both tasks. We believe the joint model’s novelty offers a new perspective on machine comprehension beyond architectural engineering, and serves as a first step towards autonomous information seeking.",
"title": ""
},
{
"docid": "87f93c4d02b23b5d9488645bd39e49b8",
"text": "Information fusion is a field of research that strives to establish theories, techniques and tools that exploit synergies in data retrieved from multiple sources. In many real-world applications huge amounts of data need to be gathered, evaluated and analyzed in order to make the right decisions. An important key element of information fusion is the adequate presentation of the data that guides decision-making processes efficiently. This is where theories and tools developed in information visualization, visual data mining and human computer interaction (HCI) research can be of great support. This report presents an overview of information fusion and information visualization, highlighting the importance of the latter in information fusion research. Information visualization techniques that can be used in information fusion are presented and analyzed providing insights into its strengths and weakness. Problems and challenges regarding the presentation of information that the decision maker faces in the ground situation awareness scenario (GSA) lead to open questions that are assumed to be the focus of further research.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "13a7fc51cd38d08fca983bc9eb9f7522",
"text": "Supply chain relationships play a significant role in supply chain management to respond to dynamic export market changes. If the dyadic exporter-producer relationships are still weak, they impede the emergence of a high performance supply chain within an export market. This paper develops a conceptual framework for understanding how exporter-producer relationships include not only the relationship system but also network and transaction systems; and thus introduces a more integrated way of looking at supply chain management based on information sharing as a key process between exporters and producers. To achieve this aim, supply chain relationships are reviewed from the perspectives of relationship marketing theory, network theory and transaction cost theory. Findings from previous research are discussed to provide a better understanding of how these relationships have evolved. A conceptual framework is built by offering a central proposition that specific dimensions of relationships, networks and transactions are the key antecedents of information sharing, which in turn influences export performance in supply chain management.",
"title": ""
},
{
"docid": "8e7cef98d1d3404dd5101ddde88489ef",
"text": "The present experiments were designed to determine the efficacy of metomidate hydrochloride as an alternative anesthetic with potential cortisol blocking properties for channel catfish Ictalurus punctatus. Channel catfish (75 g) were exposed to concentrations of metomidate ranging from 0.5 to 16 ppm for a period of 60 min. At 16-ppm metomidate, mortality occurred in 65% of the catfish. No mortalities were observed at concentrations of 8 ppm or less. The minimum concentration of metomidate producing desirable anesthetic properties was 6 ppm. At this concentration, acceptable induction and recovery times were observed in catfish ranging from 3 to 810 g average body weight. Plasma cortisol levels during metomidate anesthesia (6 ppm) were compared to fish anesthetized with tricaine methanesulfonate (100 ppm), quinaldine (30 ppm) and clove oil (100 ppm). Cortisol levels of catfish treated with metomidate and clove oil remained at baseline levels during 30 min of anesthesia (P>0.05). Plasma cortisol levels of tricaine methanesulfonate and quinaldine anesthetized catfish peaked approximately eightand fourfold higher (P< 0.05), respectively, than fish treated with metomidate. These results suggest that the physiological disturbance of channel catfish during routine-handling procedures and stress-related research could be reduced through the use of metomidate as an anesthetic. D 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
bf13df297bf633e8c7b9ef85c122c3ec
|
Unbiased Estimation of the Value of an Optimized Policy
|
[
{
"docid": "e5cd8e17db6c3c65320c0581dfecee79",
"text": "In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without \"sparsity\" assumptions. We propose an \"honest\" approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the \"ground truth\" for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges between 7-22%.",
"title": ""
}
] |
[
{
"docid": "4eb9808144e04bf0c01121f2ec7261d2",
"text": "The rise of multicore computing has greatly increased system complexity and created an additional burden for software developers. This burden is especially troublesome when it comes to optimizing software on modern computing systems. Autonomic or adaptive computing has been proposed as one method to help application programmers handle this complexity. In an autonomic computing environment, system services monitor applications and automatically adapt their behavior to increase the performance of the applications they support. Unfortunately, applications often run as performance black-boxes and adaptive services must infer application performance from low-level information or rely on system-specific ad hoc methods. This paper proposes a standard framework, Application Heartbeats, which applications can use to communicate both their current and target performance and which autonomic services can use to query these values.\n The Application Heartbeats framework is designed around the well-known idea of a heartbeat. At important points in the program, the application registers a heartbeat. In addition, the interface allows applications to express their performance in terms of a desired heart rate and/or a desired latency between specially tagged heartbeats. Thus, the interface provides a standard method for an application to directly communicate its performance and goals while allowing autonomic services access to this information. Thus, Heartbeat-enabled applications are no longer performance black-boxes. This paper presents the Applications Heartbeats interface, characterizes two reference implementations (one suitable for clusters and one for multicore), and illustrates the use of Heartbeats with several examples of systems adapting behavior based on feedback from heartbeats.",
"title": ""
},
{
"docid": "645faf32f40732d291e604d7240f0546",
"text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.",
"title": ""
},
{
"docid": "e05e91be6ca5423d795f17be8a1cec10",
"text": "A novel active gate driver (AGD) for silicon carbide (SiC) MOSFET is studied in this paper. The gate driver (GD) increases the gate resistance value during the voltage plateau area of the gate-source voltage, in both turn-on and turn-off transitions. The proposed AGD is validated in both simulation and experimental environments and in hard-switching conditions. The simulation is evaluated in MATLAB/Simulink with 100 kHz of switching frequency and 600 V of dc-bus, whereas, the experimental part was realised at 100 kHz and 100 V of dc-bus. The results show that the gate driver can reduce the over-voltage and ringing, with low switching losses.",
"title": ""
},
{
"docid": "c13bf429abfb718e6c3557ae71f45f8f",
"text": "Researchers who study punishment and social control, like those who study other social phenomena, typically seek to generalize their findings from the data they have to some larger context: in statistical jargon, they generalize from a sample to a population. Generalizations are one important product of empirical inquiry. Of course, the process by which the data are selected introduces uncertainty. Indeed, any given dataset is but one of many that could have been studied. If the dataset had been different, the statistical summaries would have been different, and so would the conclusions, at least by a little. How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using wellknown methods of statistical inference, with standard errors, t-tests, and P-values, culminating in the “tabular asterisks” of Meehl (1978). These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling.1 When the data are generated by random sampling from a clearly defined population, and when the goal is to estimate population parameters from sample statistics, statistical inference can be relatively straightforward. The usual textbook formulas apply; tests of statistical significance and confidence intervals follow. If the random-sampling assumptions do not apply, or the parameters are not clearly defined, or the inferences are to a population that is only vaguely defined, the calibration of uncertainty offered by contemporary statistical technique is in turn rather questionable.2 Thus, investigators who use conventional statistical technique",
"title": ""
},
{
"docid": "e6332552fb29765414020ee97184cc07",
"text": "In A History of God, Karen Armstrong describes a division, made by fourth century Christians, between kerygma and dogma: 'religious truth … capable of being expressed and defined clearly and logically,' versus 'religious insights [that] had an inner resonance that could only be apprehended by each individual in his own time during … contemplation' (Armstrong, 1993, p.114). This early dual-process theory had its roots in Plato and Aristotle, who suggested a division between 'philosophy,' which could be 'expressed in terms of reason and thus capable of proof,' and knowledge contained in myths, 'which eluded scientific demonstration' (Armstrong, 1993, 113–14). This division—between what can be known and reasoned logically versus what can only be experienced and apprehended—continued to influence Western culture through the centuries, and arguably underlies our current dual-process theories of reasoning. In psychology, the division between these two forms of understanding have been described in many different ways. The underlying theme of 'overtly reasoned' versus 'perceived, intuited' often ties these dual process theories together. In Western culture, the latter form of thinking has often been maligned (Dijksterhuis and Nordgren, 2006; Gladwell, 2005; Lieberman, 2000). Recently, cultural psychologists have suggested that although the distinction itself—between reasoned and intuited knowl-edge—may have precedents in the intellectual traditions of other cultures, the privileging of the former rather than the latter may be peculiar to Western cultures The Chinese philosophical tradition illustrates this difference of emphasis. Instead of an epistemology that was guided by abstract rules, 'the Chinese in esteeming what was immediately percepti-ble—especially visually perceptible—sought intuitive instantaneous understanding through direct perception' (Nakamura, 1960/1988, p.171). Taoism—the great Chinese philosophical school besides Confucianism—developed an epistemology that was particularly oriented towards concrete perception and direct experience (Fung, 1922; Nakamura, 1960/1988). Moreover, whereas the Greeks were concerned with definitions and devising rules for the purposes of classification, for many influential Taoist philosophers, such as Chuang Tzu, '… the problem of … how terms and attributes are to be delimited, leads one in precisely the wrong direction. Classifying or limiting knowledge fractures the greater knowledge' (Mote, 1971, p.102).",
"title": ""
},
{
"docid": "7064d73864a64e2b76827e3252390659",
"text": "Abstmct-In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. lf S,, denotes the subject’s capital after n bets at 27 for 1 odds, and lf it is assumed that the subject hnows the underlying prpbabillty distribution for the process X, then the entropy estimate ls H,(X) =(l -(l/n) log,, S,) log, 27 bits/symbol. If the subject does npt hnow the true probabllty distribution for the stochastic process, then Z&(X! ls an asymptotic upper bound for the true entropy. ff X is stationary, EH,,(X)+H(X), H(X) bell the true entropy of the process. Moreovzr, lf X is ergodic, then by the SLOW McMilhm-Brebnan theorem H,,(X)+H(X) with probability one. Preliminary indications are that English text has au entropy of approximately 1.3 bits/symbol, which agrees well with Shannon’s estimate.",
"title": ""
},
{
"docid": "2bdefbc66ae89ce8e48acf0d13041e0a",
"text": "We introduce an ac transconductance dispersion method (ACGD) to profile the oxide traps in an MOSFET without needing a body contact. The method extracts the spatial distribution of oxide traps from the frequency dependence of transconductance, which is attributed to charge trapping as modulated by an ac gate voltage. The results from this method have been verified by the use of the multifrequency charge pumping (MFCP) technique. In fact, this method complements the MFCP technique in terms of the trap depth that each method is capable of probing. We will demonstrate the method with InP passivated InGaAs substrates, along with electrically stressed Si N-MOSFETs.",
"title": ""
},
{
"docid": "8dee3ada764a40fce6b5676287496ccd",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "6659c5b954c14003e6e62c557fffa0f2",
"text": "Existing language models such as n-grams for software code often fail to capture a long context where dependent code elements scatter far apart. In this paper, we propose a novel approach to build a language model for software code to address this particular issue. Our language model, partly inspired by human memory, is built upon the powerful deep learning-based Long Short Term Memory architecture that is capable of learning long-term dependencies which occur frequently in software code. Results from our intrinsic evaluation on a corpus of Java projects have demonstrated the effectiveness of our language model. This work contributes to realizing our vision for DeepSoft, an end-to-end, generic deep learning-based framework for modeling software and its development process.",
"title": ""
},
{
"docid": "734fc66c7c745498ca6b2b7fc6780919",
"text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.",
"title": ""
},
{
"docid": "0dd8e07502ed70b38fe6eb478115f5a8",
"text": "Department of Psychology Iowa State University, Ames, IA, USA Over the last 30 years, the video game industry has grown into a multi-billion dollar business. More children and adults are spending time playing computer games, consoles games, and online games than ever before. Violence is a dominant theme in most of the popular video games. This article reviews the current literature on effects of violent video game exposure on aggression-related variables. Exposure to violent video games causes increases in aggressive behavior, cognitions, and affect. Violent video game exposure also causes increases in physiological desensitization to real-life violence and decreases in helping behavior. The current video game literature is interpreted in terms of the general aggression model (GAM). Differences between violent video game exposure and violent television are also discussed.",
"title": ""
},
{
"docid": "b078c459182501c52f38400e363cb2ca",
"text": "Design considerations for piezoelectric-based energy harvesters for MEMS-scale sensors are presented, including a review of past work. Harvested ambient vibration energy can satisfy power needs of advanced MEMS-scale autonomous sensors for numerous applications, e.g., structural health monitoring. Coupled 1-D and modal (beam structure) electromechanical models are presented to predict performance, especially power, from measured low-level ambient vibration sources. Models are validated by comparison to prior published results and tests of a MEMS-scale device. A non-optimized prototype low-level ambient MEMS harvester producing 30 μW/cm3 is designed and modeled. A MEMS fabrication process for the prototype device is presented based on past work.",
"title": ""
},
{
"docid": "1846bbaac13e4a8d5c34b1657a5b634c",
"text": "Technology advancement entails an analog design scenario in which sophisticated signal processing algorithms are deployed in mixed-mode and radio frequency circuits to compensate for deterministic and random deficiencies of process technologies. This article reviews one such approach of applying a common communication technique, equalization, to correct for nonlinear distortions in analog circuits, which is analogized as non-ideal communication channels. The efficacy of this approach is showcased by a few latest advances in data conversion and RF transmission integrated circuits, where unprecedented energy efficiency, circuit linearity, and post-fabrication adaptability have been attained with low-cost digital processing.",
"title": ""
},
{
"docid": "46658067ffc4fd2ecdc32fbaaa606170",
"text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
},
{
"docid": "87baf6381f4297b6e9af7659ef111f5c",
"text": "Indonesian Sign Language System (ISLS) has been used widely by Indonesian for translating the sign language of disabled people to many applications, including education or entertainment. ISLS consists of static and dynamic gestures in representing words or sentences. However, individual variations in performing sign language have been a big challenge especially for developing automatic translation. The accuracy of recognizing the signs will decrease linearly with the increase of variations of gestures. This research is targeted to solve these issues by implementing the multimodal methods: leap motion and Myo armband controllers (EMG electrodes). By combining these two data and implementing Naïve Bayes classifier, we hypothesized that the accuracy of gesture recognition system for ISLS then can be increased significantly. The data streams captured from hand-poses were based on time-domain series method which will warrant the generated data synchronized accurately. The selected features for leap motion data would be based on fingers positions, angles, and elevations, while for the Myo armband would be based on electrical signal generated by eight channels of EMG electrodes relevant to the activities of linked finger’s and forearm muscles. This study will investigate the accuracy of gesture recognition by using either single modal or multimodal for translating Indonesian sign language. For multimodal strategy, both features datasets were merged into a single dataset which was then used for generating a model for each hand gesture. The result showed that there was a significant improvement on its accuracy, from 91% for single modal using leap motion to 98% for multi-modal (combined with Myo armband). The confusion matrix of multimodal method also showed better performance than the single-modality. Finally, we concluded that the implementation of multi-modal controllers for ISLS’s gesture recognition showed better accuracy and performance compared of single modality of using only leap motion controller.",
"title": ""
},
{
"docid": "098da928abe37223e0eed0c6bf0f5747",
"text": "With the proliferation of social media, fashion inspired from celebrities, reputed designers as well as fashion influencers has shortned the cycle of fashion design and manufacturing. However, with the explosion of fashion related content and large number of user generated fashion photos, it is an arduous task for fashion designers to wade through social media photos and create a digest of trending fashion. Designers do not just wish to have fashion related photos at one place but seek search functionalities that can let them search photos with natural language queries such as ‘red dress’, ’vintage handbags’, etc in order to spot the trends. This necessitates deep parsing of fashion photos on social media to localize and classify multiple fashion items from a given fashion photo. While object detection competitions such as MSCOCO have thousands of samples for each of the object categories, it is quite difficult to get large labeled datasets for fast fashion items. Moreover, state-of-the-art object detectors [2, 7, 9] do not have any functionality to ingest large amount of unlabeled data available on social media in order to fine tune object detectors with labeled datasets. In this work, we show application of a generic object detector [11], that can be pretrained in an unsupervised manner, on 24 categories from recently released Open Images V4 dataset. We first train the base architecture of the object detector using unsupervisd learning on 60K unlabeled photos from 24 categories gathered from social media, and then subsequently fine tune it on 8.2K labeled photos from Open Images V4 dataset. On 300 × 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K photos while performing 11% to 17% better as compared to the state-of-the-art object detectors. We show that this improvement is due to our choice of architecture that lets us do unsupervised learning and that performs significantly better in identifying small objects. 1",
"title": ""
},
{
"docid": "59d6765507415b0365f3193843d01459",
"text": "Password typing is the most widely used identity verification method in World Wide Web based Electronic Commerce. Due to its simplicity, however, it is vulnerable to imposter attacks. Keystroke dynamics and password checking can be combined to result in a more secure verification system. We propose an autoassociator neural network that is trained with the timing vectors of the owner's keystroke dynamics and then used to discriminate between the owner and an imposter. An imposter typing the correct password can be detected with very high accuracy using the proposed approach. This approach can be effectively implemented by a Java applet and used in the World Wide Web.",
"title": ""
},
{
"docid": "f77982f55dfd6f188b8fb09e7c36c695",
"text": "Bisimulation is the primitive notion of equivalence between concurrent processes in Milner's Calculus of Communicating Systems (CCS); there is a nontrivial game-like protocol for distinguishing nonbisimular processes. In contrast, process distinguishability in Hoare's theory of Communicating Sequential Processes (CSP) is determined solely on the basis of traces of visible actions. We examine what additional operations are needed to explain bisimulation similarly—specifically in the case of finitely branching processes without silent moves. We formulate a general notion of Structured Operational Semantics for processes with Guarded recursion (GSOS), and demonstrate that bisimulation does not agree with trace congruence with respect to any set of GSOS-definable contexts. In justifying the generality and significance of GSOS's, we work out some of the basic proof theoretic facts which justify the SOS discipline.",
"title": ""
}
] |
scidocsrr
|
39745f121b40e98c9612f0fa82981def
|
Development of grip amplified glove using bi-articular mechanism with pneumatic artificial rubber muscle
|
[
{
"docid": "5da1f0692a71e4dde4e96009b99e0c13",
"text": "The McKibben artificial muscle is a pneumatic actuator whose properties include a very high force to weight ratio. This characteristic makes it very attractive for a wide range of applications such as mobile robots and prosthetic appliances for the disabled. Typical applications often require a significant number of repeated contractions and extensions or cycles of the actuator. This repeated action leads to fatigue and failure of the actuator, yielding a life span that is often shorter than its more common robotic counterparts such as electric motors or pneumatic cylinders. In this paper, we develop a model that predicts the maximum number of life cycles of the actuator based on available uniaxial tensile properties of the actuator’s inner bladder. Experimental results, which validate the model, reveal McKibben actuators fabricated with natural latex rubber bladders have a fatigue limit 24 times greater than actuators fabricated with synthetic silicone rubber at large contraction ratios.",
"title": ""
},
{
"docid": "26a2a78909393566ef618a7d56b342d3",
"text": "The purpose of this study is to develop a wearable power assist device for hand grasping in order to support activity of daily living (ADL) safely and easily. In this paper, the mechanism of the developed power assist device is described, and then the effectiveness of this device is discussed experimentally.",
"title": ""
}
] |
[
{
"docid": "785b9b8522d2957fc5ecf53bf3c408e0",
"text": "Clinical trial investigators often record a great deal of baseline data on each patient at randomization. When reporting the trial's findings such baseline data can be used for (i) subgroup analyses which explore whether there is evidence that the treatment difference depends on certain patient characteristics, (ii) covariate-adjusted analyses which aim to refine the analysis of the overall treatment difference by taking account of the fact that some baseline characteristics are related to outcome and may be unbalanced between treatment groups, and (iii) baseline comparisons which compare the baseline characteristics of patients in each treatment group for any possible (unlucky) differences. This paper examines how these issues are currently tackled in the medical journals, based on a recent survey of 50 trial reports in four major journals. The statistical ramifications are explored, major problems are highlighted and recommendations for future practice are proposed. Key issues include: the overuse and overinterpretation of subgroup analyses; the underuse of appropriate statistical tests for interaction; inconsistencies in the use of covariate-adjustment; the lack of clear guidelines on covariate selection; the overuse of baseline comparisons in some studies; the misuses of significance tests for baseline comparability, and the need for trials to have a predefined statistical analysis plan for all these uses of baseline data.",
"title": ""
},
{
"docid": "0f5c1d2503a2845e409d325b085bf600",
"text": "We present Accel, a novel semantic video segmentation system that achieves high accuracy at low inference cost by combining the predictions of two network branches: (1) a reference branch that extracts high-detail features on a reference keyframe, and warps these features forward using frame-to-frame optical flow estimates, and (2) an update branch that computes features of adjustable quality on the current frame, performing a temporal update at each video frame. The modularity of the update branch, where feature subnetworks of varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables operation over a new, state-of-the-art accuracy-throughput trade-off spectrum. Over this curve, Accel models achieve both higher accuracy and faster inference times than the closest comparable single-frame segmentation networks. In general, Accel significantly outperforms previous work on efficient semantic video segmentation, correcting warping-related error that compounds on datasets with complex dynamics. Accel is end-to-end trainable and highly modular: the reference network, the optical flow network, and the update network can each be selected independently, depending on application requirements, and then jointly fine-tuned. The result is a robust, general system for fast, high-accuracy semantic segmentation on video.",
"title": ""
},
{
"docid": "5c496d19a841360fc874ef6989b3a6b4",
"text": "The electronic circuit breaker used in DC microgrids are required to work within their safe operating areas bounded by temperature, voltage and current limits. Traditional approach managed to protect these switches through rapid current cut-off operations at over-load or fault situations, but failing to avoid the disturbance induced by transient current surges or noises which are not harmful to the grid operations. Aiming to increase the quality of circuit breaker operations and furthermore improve its reliability, this paper proposed a SiC MOSFET based DC circuit breaker based on the variable time-delay protection scheme. The cutoff operations only take place after proper delay time, which are precisely catered according to the transient thermal properties of SiC devices and the properties of DC loads. The proposed scheme has been implemented with hardware prototype and experimentally verified under different fault situations.",
"title": ""
},
{
"docid": "bed3e58bc8e69242e6e00c7d13dabb93",
"text": "The convergence of online learning algorithms is analyzed using the tools of the stochastic approximation theory, and proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use today, as illustrated by several examples. The stochastic approximation theory then provides general results describing the convergence of all these learning algorithms at once. Revised version, May 2018.",
"title": ""
},
{
"docid": "d3e45d254e7a09432b8f4b7729e44fa3",
"text": "In this paper, an approach of estimating signal parameters via rotational invariance technique (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of the ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation direction-of-arrivals and the angular spreads. The proposed estimator enjoys closed-form expressions and hence it bypasses the searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when the large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that albeit the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.",
"title": ""
},
{
"docid": "a23fd89da025d456f9fe3e8a47595c6a",
"text": "Mobile devices are especially vulnerable nowadays to malware attacks, thanks to the current trend of increased app downloads. Despite the significant security and privacy concerns it received, effective malware detection (MD) remains a significant challenge. This paper tackles this challenge by introducing a streaminglized machine learning-based MD framework, StormDroid: (i) The core of StormDroid is based on machine learning, enhanced with a novel combination of contributed features that we observed over a fairly large collection of data set; and (ii) we streaminglize the whole MD process to support large-scale analysis, yielding an efficient and scalable MD technique that observes app behaviors statically and dynamically. Evaluated on roughly 8,000 applications, our combination of contributed features improves MD accuracy by almost 10% compared with state-of-the-art antivirus systems; in parallel our streaminglized process, StormDroid, further improves efficiency rate by approximately three times than a single thread.",
"title": ""
},
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
},
{
"docid": "4c82ba56d6532ddc57c2a2978de7fe5a",
"text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "a730a3c079e9cdb2c77fa945cbc8f685",
"text": "In the last 30 years a sea of change has occurred in the outlook for these cancers. Chemotherapy has allowed better local and systemic control5, 6. Better imaging with CT and MRI has allowed the surgeon to accurately define the extent and therefore plan tumor resection. Advances in bioengineering have provided exciting options for reconstruction and the world has moved from amputation to limb salvage. In osteosarcoma, survival improved from dismal 10-20% to 50-70%7, 8. Long term studies showed that limb salvage operations, performed with wide margins and chemotherapy did not compromise the survival or local control compared to an amputation9-14.",
"title": ""
},
{
"docid": "6993271cf5ef62ced8fa93519720b355",
"text": "Vehicle-to-grid (V2G) has been proposed as a way to increase the adoption rate of electric vehicles (EVs). Unidirectional V2G is especially attractive because it requires little if any additional infrastructure other than communication between the EV and an aggregator. The aggregator in turn combines the capacity of many EVs to bid into energy markets. In this work an algorithm for unidirectional regulation is developed for use by an aggregator. Several smart charging algorithms are used to set the point about which the rate of charge varies while performing regulation. An aggregator profit maximization algorithm is formulated with optional system load and price constraints analogous to the smart charging algorithms. Simulations on a hypothetical group of 10 000 commuter EVs in the Pacific Northwest verify that the optimal algorithms increase aggregator profits while reducing system load impacts and customer costs.",
"title": ""
},
{
"docid": "fed4de5870b41715d7f9abc0714db99d",
"text": "This paper presents an approach to stereovision applied to small water vehicles. By using a small low-cost computer and inexpensive off-the-shelf components, we were able to develop an autonomous driving system capable of following other vehicle and moving along paths delimited by coloured buoys. A pair of webcams was used and, with an ultrasound sensor, we were also able to implement a basic frontal obstacle avoidance system. With the help of the stereoscopic system, we inferred the position of specific objects that serve as references to the ASV guidance. The final system is capable of identifying and following targets in a distance of over 5 meters. This system was integrated with the framework already existent and shared by all the vehicles used in the OceanSys research group at INESC - DEEC/FEUP.",
"title": ""
},
{
"docid": "a12c4e820254b07f322727affe23cb9d",
"text": "Attributed network embedding has been widely used in modeling real-world systems. The obtained low-dimensional vector representations of nodes preserve their proximity in terms of both network topology and node attributes, upon which different analysis algorithms can be applied. Recent advances in explanation-based learning and human-in-the-loop models show that by involving experts, the performance of many learning tasks can be enhanced. It is because experts have a better cognition in the latent information such as domain knowledge, conventions, and hidden relations. It motivates us to employ experts to transform their meaningful cognition into concrete data to advance network embedding. However, learning and incorporating the expert cognition into the embedding remains a challenging task. Because expert cognition does not have a concrete form, and is difficult to be measured and laborious to obtain. Also, in a real-world network, there are various types of expert cognition such as the comprehension of word meaning and the discernment of similar nodes. It is nontrivial to identify the types that could lead to a significant improvement in the embedding. In this paper, we study a novel problem of exploring expert cognition for attributed network embedding and propose a principled framework NEEC. We formulate the process of learning expert cognition as a task of asking experts a number of concise and general queries. Guided by the exemplar theory and prototype theory in cognitive science, the queries are systematically selected and can be generalized to various real-world networks. The returned answers from the experts contain their valuable cognition. We model them as new edges and directly add into the attributed network, upon which different embedding methods can be applied towards a more informative embedding representation. Experiments on real-world datasets verify the effectiveness and efficiency of NEEC.",
"title": ""
},
{
"docid": "02842ef0acfed3d59612ce944b948adf",
"text": "One of the common modalities for observing mental activity is electroencephalogram (EEG) signals. However, EEG recording is highly susceptible to various sources of noise and to inter subject differences. In order to solve these problems we present a deep recurrent neural network (RNN) architecture to learn robust features and predict the levels of cognitive load from EEG recordings. Using a deep learning approach, we first transform the EEG time series into a sequence of multispectral images which carries spatial information. Next, we train our recurrent hybrid network to learn robust representations from the sequence of frames. The proposed approach preserves spectral, spatial and temporal structures and extracts features which are less sensitive to variations along each dimension. Our results demonstrate cognitive memory load prediction across four different levels with an overall accuracy of 92.5% during the memory task execution and reduce classification error to 7.61% in comparison to other state-of-art techniques.",
"title": ""
},
{
"docid": "bfc0a8c7d5fb816b9d8b8a0600893536",
"text": "Meta-parameters in reinforcement learning should be tuned to the environmental dynamics and the animal performance. Here, we propose a biologically plausible meta-reinforcement learning algorithm for tuning these meta-parameters in a dynamic, adaptive manner. We tested our algorithm in both a simulation of a Markov decision task and in a non-linear control task. Our results show that the algorithm robustly finds appropriate meta-parameter values, and controls the meta-parameter time course, in both static and dynamic environments. We suggest that the phasic and tonic components of dopamine neuron firing can encode the signal required for meta-learning of reinforcement learning.",
"title": ""
},
{
"docid": "a9d1cdfd844a7347d255838d5eb74b03",
"text": "An economy based on the exchange of capital, assets and services between individuals has grown significantly, spurred by proliferation of internet-based platforms that allow people to share underutilized resources and trade with reasonably low transaction costs. The movement toward this economy of “sharing” translates into market efficiencies that bear new products, reframe established services, have positive environmental effects, and may generate overall economic growth. This emerging paradigm, entitled the collaborative economy, is disruptive to the conventional company-driven economic paradigm as evidenced by the large number of peer-to-peer based services that have captured impressive market shares sectors ranging from transportation and hospitality to banking and risk capital. The panel explores economic, social, and technological implications of the collaborative economy, how digital technologies enable it, and how the massive sociotechnical systems embodied in these new peer platforms may evolve in response to the market and social forces that drive this emerging ecosystem.",
"title": ""
},
{
"docid": "c999bd0903b53285c053c76f9fcc668f",
"text": "In this paper, a bibliographical review on reconfigurable (active) fault-tolerant control systems (FTCS) is presented. The existing approaches to fault detection and diagnosis (FDD) and fault-tolerant control (FTC) in a general framework of active fault-tolerant control systems (AFTCS) are considered and classified according to different criteria such as design methodologies and applications. A comparison of different approaches is briefly carried out. Focuses in the field on the current research are also addressed with emphasis on the practical application of the techniques. In total, 376 references in the open literature, dating back to 1971, are compiled to provide an overall picture of historical, current, and future developments in this area. # 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3392de7e3182420e882617f0baff389a",
"text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.",
"title": ""
},
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
},
{
"docid": "3a03ad3bc42f899e050491b04c01b78c",
"text": "A passive/active L/S-band (PALS) microwave aircraft instrument to measure ocean salinity and soil moisture has been built and tested. Because the L-band brightness temperatures associated with salinity changes are expected to be small, it was necessary to build a very sensitive and stable system. This new instrument has dual-frequency, dual polarization radiometer and radar sensors. The antenna is a high beam efficiency conical horn. The PALS instrument was installed on the NCAR C-130 aircraft and soil moisture measurements were made in support of the Southern Great Plains 1999 experiment in Oklahoma from July 8–14, 1999. Data taken before and after a rainstorm showed significant changes in the brightness temperatures, polarization ratios and radar backscatter, as a function of soil moisture. Salinity measurement missions were flown on July 17–19, 1999, southeast of Norfolk, VA, over the Gulf Stream. The measurements indicated a clear and repeatable salinity signal during these three days, which was in good agreement with the Cape Hatteras ship salinity data. Data was also taken in the open ocean and a small decrease of 0.2 K was measured in the brightness temperature, which corresponded to the salinity increase of 0.4 psu measured by the M/V Oleander vessel.",
"title": ""
}
] |
scidocsrr
|
78b2ef4bd48daf5f99403fbdff835aac
|
Affective News : The Automated Coding of Sentiment in Political Texts LORI YOUNG and STUART SOROKA
|
[
{
"docid": "fc70a1820f838664b8b51b5adbb6b0db",
"text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "72b8d077bb96fcb4d14b09f9bb85132f",
"text": "Locomotion at low Reynolds number is not possible with cycles of reciprocal motion, an example being the oscillation of a single pair of rigid paddles or legs. Here, I demonstrate the possibility of swimming with two or more pairs of legs. They are assumed to oscillate collectively in a metachronal wave pattern in a minimal model based on slender-body theory for Stokes flow. The model predicts locomotion in the direction of the traveling wave, as commonly observed along the body of free-swimming crustaceans. The displacement of the body and the swimming efficiency depend on the number of legs, the amplitude, and the phase of oscillations. This study shows that paddling legs with distinct orientations and phases offers a simple mechanism for driving flow.",
"title": ""
},
{
"docid": "b89f999bd27a6cbe1865f8853e384eba",
"text": "A rescue crawler robot with flipper arms has high ability to get over rough terrain, but it is hard to control its flipper arms in remote control. The authors aim at development of a semi-autonomous control system for the solution. In this paper, the authors propose a sensor reflexive method that controls these flippers autonomously for getting over unknown steps. Our proposed method is effective in unknown and changeable environment. The authors applied the proposed method to Aladdin, and examined validity of these control rules in unknown environment.",
"title": ""
},
{
"docid": "d4b9d294d60ef001bee3a872b17a75b1",
"text": "Real-time formative assessment of student learning has become the subject of increasing attention. Students' textual responses to short answer questions offer a rich source of data for formative assessment. However, automatically analyzing textual constructed responses poses significant computational challenges, and the difficulty of generating accurate assessments is exacerbated by the disfluencies that occur prominently in elementary students' writing. With robust text analytics, there is the potential to accurately analyze students' text responses and predict students' future success. In this paper, we present WriteEval, a hybrid text analytics method for analyzing student-composed text written in response to constructed response questions. Based on a model integrating a text similarity technique with a semantic analysis technique, WriteEval performs well on responses written by fourth graders in response to short-text science questions. Further, it was found that WriteEval's assessments correlate with summative analyses of student performance.",
"title": ""
},
{
"docid": "751524df9e2f7cbec6f3c3f40cd73552",
"text": "Syntactic knowledge is widely held to be partially innate, rather than learned. In a classic example, it is sometimes argued that children know the proper use of anaphoric one, although that knowledge could not have been learned from experience. Lidz et al. [Lidz, J., Waxman, S., & Freedman, J. (2003). What infants know about syntax but couldn't have learned: Experimental evidence for syntactic structure at 18 months. Cognition, 89, B65-B73.] pursue this argument, and present corpus and experimental evidence that appears to support it; they conclude that specific aspects of this knowledge must be innate. We demonstrate, contra Lidz et al., that this knowledge may in fact be acquired from the input, through a simple Bayesian learning procedure. The learning procedure succeeds because it is sensitive to the absence of particular input patterns--an aspect of learning that is apparently overlooked by Lidz et al. More generally, we suggest that a prominent form of the \"argument from poverty of the stimulus\" suffers from the same oversight, and is as a result logically unsound.",
"title": ""
},
{
"docid": "a36e20361e39b46d2e161b48596c0fd5",
"text": "This paper investigates the issue of interference avoidance in body area networks (BANs). IEEE 802.15 Task Group 6 presented several schemes to reduce such interference, but these schemes are still not proper solutions for BANs. We present a novel distributed TDMA-based beacon interval shifting scheme that reduces interference in the BANs. A design goal of the scheme is to avoid the wakeup period of each BAN coinciding with other networks by employing carrier sensing before a beacon transmission. We analyze the beacon interval shifting scheme and investigate the proper back-off length when the channel is busy. We compare the performance of the proposed scheme with the schemes presented in IEEE 802.15 Task Group 6 using an OMNeT++ simulation. The simulation results show that the proposed scheme has a lower packet loss, energy consumption, and delivery-latency than the schemes of IEEE 802.15 Task Group 6.",
"title": ""
},
{
"docid": "7f7d51fc848cf9e47f492a4e33959fca",
"text": "Future wars will be cyber wars and the attacks will be a sturdy amalgamation of cryptography along with malware to distort information systems and its security. The explosive Internet growth facilitates cyber-attacks. Web threats include risks, that of loss of confidential data and erosion of consumer confidence in e-commerce. The emergence of cyber hack jacking threat in the new form in cyberspace is known as ransomware or crypto virus. The locker bot waits for specific triggering events, to become active. It blocks the task manager, command prompt and other cardinal executable files, a thread checks for their existence every few milliseconds, killing them if present. Imposing serious threats to the digital generation, ransomware pawns the Internet users by hijacking their system and encrypting entire system utility files and folders, and then demanding ransom in exchange for the decryption key it provides for release of the encrypted resources to its original form. We present in this research, the anatomical study of a ransomware family that recently picked up quite a rage and is called CTB locker, and go on to the hard money it makes per user, and its source C&C server, which lies with the Internet's greatest incognito mode-The Dark Net. Cryptolocker Ransomware or the CTB Locker makes a Bitcoin wallet per victim and payment mode is in the form of digital bitcoins which utilizes the anonymity network or Tor gateway. CTB Locker is the deadliest malware the world ever encountered.",
"title": ""
},
{
"docid": "f2026d9d827c088711875acc56b12b70",
"text": "The goal of the study is to formalize the concept of viral marketing (VM) as a close derivative of contagion models from epidemiology. The study examines in detail the two common mathematical models of epidemic spread and their marketing implications. The SIR and SEIAR models of infectious disease spread are examined in detail. From this analysis of the epidemiological foundations along with a review of relevant marketing literature, a marketing model of VM is developed. This study demonstrates the key elements that define viral marketing as a formal marketing concept and the distinctive mechanical features that differ from conventional marketing.",
"title": ""
},
{
"docid": "8d2aeee4064a2d6e65afeaf5330b2c49",
"text": "In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed.",
"title": ""
},
{
"docid": "32aab68579ffc866a187824bc502efc3",
"text": "This paper presents an automatic people counting system based on face detection, where the number of people passing through a gate or door is counted by setting a video camera. The basic idea is to first use the frame difference to detect the rough edges of moving people and then use the chromatic feature to locate the people face. Based on NCC (Normalized Color Coordinates) color space, the initial face candidate is obtained by detecting the skin color region and then the face feature of the candidate is analyzed to determine whether the candidate is real face or not. After face detection, a person will be tracked by following the detected face and then this person will counted if its face touches the counting line. Experimental results show that the proposed people counting algorithm can provide a high count accuracy of 80% on average for the crowded pedestrians.",
"title": ""
},
{
"docid": "349b6f11d60d851a23d2d6f9ebe88e81",
"text": "In the hybrid approach, neural network output directly serves as hidden Markov model (HMM) state posterior probability estimates. In contrast to this, in the tandem approach neural network output is used as input features to improve classic Gaussian mixture model (GMM) based emission probability estimates. This paper shows that GMM can be easily integrated into the deep neural network framework. By exploiting its equivalence with the log-linear mixture model (LMM), GMM can be transformed to a large softmax layer followed by a summation pooling layer. Theoretical and experimental results indicate that the jointly trained and optimally chosen GMM and bottleneck tandem features cannot perform worse than a hybrid model. Thus, the question “hybrid vs. tandem” simplifies to optimizing the output layer of a neural network. Speech recognition experiments are carried out on a broadcast news and conversations task using up to 12 feed-forward hidden layers with sigmoid and rectified linear unit activation functions. The evaluation of the LMM layer shows recognition gains over the classic softmax output.",
"title": ""
},
{
"docid": "0e19123e438f39c4404d4bd486348247",
"text": "Boundary and edge cues are highly beneficial in improving a wide variety of vision tasks such as semantic segmentation, object recognition, stereo, and object proposal generation. Recently, the problem of edge detection has been revisited and significant progress has been made with deep learning. While classical edge detection is a challenging binary problem in itself, the category-aware semantic edge detection by nature is an even more challenging multi-label problem. We model the problem such that each edge pixel can be associated with more than one class as they appear in contours or junctions belonging to two or more semantic classes. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features. We then propose a multi-label loss function to supervise the fused activations. We show that our proposed architecture benefits this problem with better performance, and we outperform the current state-of-the-art semantic edge detection methods by a large margin on standard data sets such as SBD and Cityscapes.",
"title": ""
},
{
"docid": "527c1e2a78e7f171025231a475a828b9",
"text": "Cryptography is the science to transform the information in secure way. Encryption is best alternative to convert the data to be transferred to cipher data which is an unintelligible image or data which cannot be understood by any third person. Images are form of the multimedia data. There are many image encryption schemes already have been proposed, each one of them has its own potency and limitation. This paper presents a new algorithm for the image encryption/decryption scheme which has been proposed using chaotic neural network. Chaotic system produces the same results if the given inputs are same, it is unpredictable in the sense that it cannot be predicted in what way the system's behavior will change for any little change in the input to the system. The objective is to investigate the use of ANNs in the field of chaotic Cryptography. The weights of neural network are achieved based on chaotic sequence. The chaotic sequence generated and forwarded to ANN and weighs of ANN are updated which influence the generation of the key in the encryption algorithm. The algorithm has been implemented in the software tool MATLAB and results have been studied. To compare the relative performance peak signal to noise ratio (PSNR) and mean square error (MSE) are used.",
"title": ""
},
{
"docid": "99d99ce673dfc4a6f5bf3e7d808a5570",
"text": "We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.",
"title": ""
},
{
"docid": "0cae4ea322daaaf33a42427b69e8ba9f",
"text": "Background--By leveraging cloud services, organizations can deploy their software systems over a pool of resources. However, organizations heavily depend on their business-critical systems, which have been developed over long periods. These legacy applications are usually deployed on-premise. In recent years, research in cloud migration has been carried out. However, there is no secondary study to consolidate this research. Objective--This paper aims to identify, taxonomically classify, and systematically compare existing research on cloud migration. Method--We conducted a systematic literature review (SLR) of 23 selected studies, published from 2010 to 2013. We classified and compared the selected studies based on a characterization framework that we also introduce in this paper. Results--The research synthesis results in a knowledge base of current solutions for legacy-to-cloud migration. This review also identifies research gaps and directions for future research. Conclusion--This review reveals that cloud migration research is still in early stages of maturity, but is advancing. It identifies the needs for a migration framework to help improving the maturity level and consequently trust into cloud migration. This review shows a lack of tool support to automate migration tasks. This study also identifies needs for architectural adaptation and self-adaptive cloud-enabled systems.",
"title": ""
},
{
"docid": "ed769b97bea6d4bbe7e282ad6dbb1c67",
"text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, or a Sepic, or a Zeta converter. The SC/SL structures are built in such a way as when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given",
"title": ""
},
{
"docid": "9dde89f24f55602e21823620b49633dd",
"text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.",
"title": ""
},
{
"docid": "748de94434343888e10d1895eef4c805",
"text": "It may seem precursory to begin my essay with an image that has been haunting me since September 11, but in a concise way it prefigures many issues I will write about. It is an image of a work by the Chilean artist and poet, Cecilia Vicuna, who has been living in New York for many years. The work dates from 1981 and shows a word drawn in three colors of pigment (white, blue, and red—also the colors of the American flag) on the pavement of the West Side Highway with the World Trade Center on the horizon. It reads in Spanish: Parti si passion (Participation), which Vicuna unravels as “to say yes in passion, or to partake of suffering.” Revealing aspects of connectivity and compassion, the word spelled out on the road was as ephemeral as its meaning remained for the art world. Unnoticed and unacknowledged, it disappeared in dust, but becomes intelligible in the present. This precarious work tells us how certain art practices, in their continuous effort to press forward a different perception of the world, have a visionary content that is marginalized because of fixed conventional readings of art, but nevertheless is bound to be recognized. My essay will seek to uncover the latency of this kind of work in the second half of the twentieth century, which is slowly coming into being today and can be understood because of feminist practice, but also perhaps because of an abruptly changed reality. Focusing on the work of two SouthAmerican artists—Lygia Clark (1920-1988) and Anna Maria Maiolino (born in 1942), both from Brazil—I will address the topic of “the relational as the radical.”",
"title": ""
},
{
"docid": "24c2877aff9c4e8441dbbbd4481370b6",
"text": "Ramp merging is a critical maneuver for road safety and traffic efficiency. Most of the current automated driving systems developed by multiple automobile manufacturers and suppliers are typically limited to restricted access freeways only. Extending the automated mode to ramp merging zones presents substantial challenges. One is that the automated vehicle needs to incorporate a future objective (e.g. a successful and smooth merge) and optimize a long-term reward that is impacted by subsequent actions when executing the current action. Furthermore, the merging process involves interaction between the merging vehicle and its surrounding vehicles whose behavior may be cooperative or adversarial, leading to distinct merging countermeasures that are crucial to successfully complete the merge. In place of the conventional rule-based approaches, we propose to apply reinforcement learning algorithm on the automated vehicle agent to find an optimal driving policy by maximizing the long-term reward in an interactive driving environment. Most importantly, in contrast to most reinforcement learning applications in which the action space is resolved as discrete, our approach treats the action space as well as the state space as continuous without incurring additional computational costs. Our unique contribution is the design of the Q-function approximation whose format is structured as a quadratic function, by which simple but effective neural networks are used to estimate its coefficients. The results obtained through the implementation of our training platform demonstrate that the vehicle agent is able to learn a safe, smooth and timely merging policy, indicating the effectiveness and practicality of our approach.",
"title": ""
},
{
"docid": "8a1adea9a1f4beeb704691d76b2e4f53",
"text": "As we observe a trend towards the recentralisation of the Internet, this paper raises the question of guaranteeing an everlasting decentralisation. We introduce the properties of strong and soft uncentralisability in order to describe systems in which all authorities can be untrusted at any time without affecting the system. We link the soft uncentralisability to another property called perfect forkability. Using that knowledge, we introduce a new cryptographic primitive called uncentralisable ledger and study its properties. We use those properties to analyse what an uncentralisable ledger may offer to classic electronic voting systems and how it opens up the realm of possibilities for completely new voting mechanisms. We review a list of selected projects that implement voting systems using blockchain technology. We then conclude that the true revolutionary feature enabled by uncentralisable ledgers is a self-sovereign and distributed identity provider.",
"title": ""
},
{
"docid": "8e805d92cb57ddd36cf37eb608fddd71",
"text": "Cogging torque in permanent-magnet machines causes torque and speed ripples, as well as acoustic noise and vibration, especially in low speed and direct drive applications. In this paper, a general analytical expression for cogging torque is derived by the energy method and the Fourier series analysis, based on the air gap permeance and the flux density distribution in an equivalent slotless machine. The optimal design parameters, such as slot number and pole number combination, skewing, pole-arc to pole-pitch ratio, and slot opening, are derived analytically to minimize the cogging torque. Finally, the finite-element analysis is adopted to verify the correctness of analytical methods.",
"title": ""
}
] |
scidocsrr
|
e5293b67d91dad5e4ed00f3bb89f6425
|
Detecting patterns of anomalies
|
[
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
}
] |
[
{
"docid": "be4fbfdde6ec503bebd5b2a8ddaa2820",
"text": "Attack-defence Capture The Flag (CTF) competitions are effective pedagogic platforms to teach secure coding practices due to the interactive and real-world experiences they provide to the contest participants. Two of the key challenges that prevent widespread adoption of such contests are: 1) The game infrastructure is highly resource intensive requiring dedication of significant hardware resources and monitoring by organizers during the contest and 2) the participants find the gameplay to be complicated, requiring performance of multiple tasks that overwhelms inexperienced players. In order to address these, we propose a novel attack-defence CTF game infrastructure which uses application containers. The results of our work showcase effectiveness of these containers and supporting tools in not only reducing the resources organizers need but also simplifying the game infrastructure. The work also demonstrates how the supporting tools can be leveraged to help participants focus more on playing the game i.e. attacking and defending services and less on administrative tasks. The results from this work indicate that our architecture can accommodate over 150 teams with 15 times fewer resources when compared to existing infrastructures of most contests today.",
"title": ""
},
{
"docid": "4540c8ed61e6c8ab3727eefc9a048377",
"text": "Network Functions Virtualization (NFV) is incrementally deployed by Internet Service Providers (ISPs) in their carrier networks, by means of Virtual Network Function (VNF) chains, to address customers' demands. The motivation is the increasing manageability, reliability and performance of NFV systems, the gains in energy and space granted by virtualization, at a cost that becomes competitive with respect to legacy physical network function nodes. From a network optimization perspective, the routing of VNF chains across a carrier network implies key novelties making the VNF chain routing problem unique with respect to the state of the art: the bitrate of each demand flow can change along a VNF chain, the VNF processing latency and computing load can be a function of the demands traffic, VNFs can be shared among demands, etc. In this paper, we provide an NFV network model suitable for ISP operations. We define the generic VNF chain routing optimization problem and devise a mixed integer linear programming formulation. By extensive simulation on realistic ISP topologies, we draw conclusions on the trade-offs achievable between legacy Traffic Engineering (TE) ISP goals and novel combined TE-NFV goals.",
"title": ""
},
{
"docid": "ff572d9c74252a70a48d4ba377f941ae",
"text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.",
"title": ""
},
{
"docid": "73e2738994b78d54d8fbad5df4622451",
"text": "Although online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs, they introduce a challenge for businesses to analyze them because of their volume, variety, velocity and veracity. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach for big data analytics. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Because the current methods used for sorting OCR may bias both their readership and helpfulness, the approach used in this study can be adopted by online vendors to develop scalable automated systems for sorting and classification of big OCR data which will benefit both vendors and consumers.",
"title": ""
},
{
"docid": "fc6f02a4eb006efe54b34b1705559a55",
"text": "Company movements and market changes often are headlines of the news, providing managers with important business intelligence (BI). While existing corporate analyses are often based on numerical financial figures, relatively little work has been done to reveal from textual news articles factors that represent BI. In this research, we developed BizPro, an intelligent system for extracting and categorizing BI factors from news articles. BizPro consists of novel text mining procedures and BI factor modeling and categorization. Expert guidance and human knowledge (with high inter-rater reliability) were used to inform system development and profiling of BI factors. We conducted a case study of using the system to profile BI factors of four major IT companies based on 6859 sentences extracted from 231 news articles published in major news sources. The results show that the chosen techniques used in BizPro – Naïve Bayes (NB) and Logistic Regression (LR) – significantly outperformed a benchmark technique. NB was found to outperform LR in terms of precision, recall, F-measure, and area under ROC curve. This research contributes to developing a new system for profiling company BI factors from news articles, to providing new empirical findings to enhance understanding in BI factor extraction and categorization, and to addressing an important yet under-explored concern of BI analysis. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dc4d11c0478872f3882946580bb10572",
"text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.",
"title": ""
},
{
"docid": "2653554c6dec7e9cfa0f5a4080d251e2",
"text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.",
"title": ""
},
{
"docid": "abf6f1218543ce69b0095bba24f40ced",
"text": "Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.",
"title": ""
},
{
"docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e77cf8938714824d46cfdbdb1b809f93",
"text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.",
"title": ""
},
{
"docid": "b85df2aec85417d45b251299dfce4f39",
"text": "A growing body of studies is developing approaches to evaluating human interaction with Web search engines, including the usability and effectiveness of Web search tools. This study explores a user-centered approach to the evaluation of the Web search engine Inquirus – a Web metasearch tool developed by researchers from the NEC Research Institute. The goal of the study reported in this paper was to develop a user-centered approach to the evaluation including: (1) effectiveness: based on the impact of users' interactions on their information problem and information seeking stage, and (2) usability: including screen layout and system capabilities for users. Twenty-two (22) volunteers searched Inquirus on their own personal information topics. Data analyzed included: (1) user preand post-search questionnaires and (2) Inquirus search transaction logs. Key findings include: (1) Inquirus was rated highly by users on various usability measures, (2) all users experienced some level of shift/change in their information problem, information seeking, and personal knowledge due to their Inquirus interaction, (3) different users experienced different levels of change/shift, and (4) the search measure precision did not correlate with other user-based measures. Some users experienced major changes/shifts in various userbased variables, such as information problem or information seeking stage with a search of low precision and vice versa. Implications for the development of user-centered approaches to the evaluation of Web and IR systems and further research are discussed.",
"title": ""
},
{
"docid": "e4b54824b2528b66e28e82ad7d496b36",
"text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.",
"title": ""
},
{
"docid": "024265b0b1872dd89d875dd5d3df5b78",
"text": "In this paper, we present a novel system to analyze human body motions for action recognition task from two sets of features using RGBD videos. The Bag-of-Features approach is used for recognizing human action by extracting local spatialtemporal features and shape invariant features from all video frames. These feature vectors are computed in four steps: Firstly, detecting all interest keypoints from RGB video frames using Speed-Up Robust Features and filters motion points using Motion History Image and Optical Flow, then aligned these motion points to the depth frame sequences. Secondly, using a Histogram of orientation gradient descriptor for computing the features vector around these points from both RGB and depth channels, then combined these feature values in one RGBD feature vector. Thirdly, computing Hu-Moment shape features from RGBD frames, fourthly, combining the HOG features with Hu-moments features in one feature vector for each video action. Finally, the k-means clustering and the multi-class K-Nearest Neighbor is used for the classification task. This system is invariant to scale, rotation, translation, and illumination. All tested are utilized on a dataset that is available to the public and used often in the community. By using this new feature combination method improves performance on actions with low movement and reach recognition rates superior to other publications of the dataset. Keywords—RGBD Videos; Feature Extraction; k-means Clustering; KNN (K-Nearest Neighbor)",
"title": ""
},
{
"docid": "cf817c1802b65f93e5426641a5ea62e2",
"text": "To protect sensitive data processed by current applications, developers, whether security experts or not, have to rely on cryptography. While cryptography algorithms have become increasingly advanced, many data breaches occur because developers do not correctly use the corresponding APIs. To guide future research into practical solutions to this problem, we perform an empirical investigation into the obstacles developers face while using the Java cryptography APIs, the tasks they use the APIs for, and the kind of (tool) support they desire. We triangulate data from four separate studies that include the analysis of 100 StackOverflow posts, 100 GitHub repositories, and survey input from 48 developers. We find that while developers find it difficult to use certain cryptographic algorithms correctly, they feel surprisingly confident in selecting the right cryptography concepts (e.g., encryption vs. signatures). We also find that the APIs are generally perceived to be too low-level and that developers prefer more task-based solutions.",
"title": ""
},
{
"docid": "77f7644a5e2ec50b541fe862a437806f",
"text": "This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for application level framing and light-weight sessions. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The framework has been prototyped in wb, a distributed whiteboard application, and has been extensively tested on a global scale with sessions ranging from a few to more than 1000 participants. The paper describes the principles that have guided our design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.",
"title": ""
},
{
"docid": "56667d286f69f8429be951ccf5d61c24",
"text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.",
"title": ""
},
{
"docid": "104c845c9c34e8e94b6e89d651635ae8",
"text": "Three families of Bacillus cyclic lipopeptides--surfactins, iturins, and fengycins--have well-recognized potential uses in biotechnology and biopharmaceutical applications. This study outlines the isolation and characterization of locillomycins, a novel family of cyclic lipopeptides produced by Bacillus subtilis 916. Elucidation of the locillomycin structure revealed several molecular features not observed in other Bacillus lipopeptides, including a unique nonapeptide sequence and macrocyclization. Locillomycins are active against bacteria and viruses. Biochemical analysis and gene deletion studies have supported the assignment of a 38-kb gene cluster as the locillomycin biosynthetic gene cluster. Interestingly, this gene cluster encodes 4 proteins (LocA, LocB, LocC, and LocD) that form a hexamodular nonribosomal peptide synthetase to biosynthesize cyclic nonapeptides. Genome analysis and the chemical structures of the end products indicated that the biosynthetic pathway exhibits two distinct features: (i) a nonlinear hexamodular assembly line, with three modules in the middle utilized twice and the first and last two modules used only once and (ii) several domains that are skipped or optionally selected.",
"title": ""
},
{
"docid": "4b432638ecceac3d1948fb2b2e9be49b",
"text": "Software process refers to the set of tools, methods, and practices used to produce a software artifact. The objective of a software process management model is to produce software artifacts according to plans while simultaneously improving the organization's capability to produce better artifacts. The SEI's Capability Maturity Model (CMM) is a software process management model; it assists organizations to provide the infrastructure for achieving a disciplined and mature software process. There is a growing concern that the CMM is not applicable to small firms because it requires a huge investment. In fact, detailed studies of the CMM show that its applications may cost well over $100,000. This article attempts to address the above concern by studying the feasibility of a scaled-down version of the CMM for use in small software firms. The logic for a scaled-down CMM is that the same quantitative quality control principles that work for larger projects can be scaled-down and adopted for smaller ones. Both the CMM and the Personal Software Process (PSP) are briefly described and are used as basis.",
"title": ""
},
{
"docid": "20a2390dede15514cd6a70e9b56f5432",
"text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.",
"title": ""
},
{
"docid": "a9b366b2b127b093b547f8a10ac05ca5",
"text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. Furthermore, we integrate the recurrent neural network with a Feedfoward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.",
"title": ""
}
] |
scidocsrr
|
141c71face17d9bb610db565c06038d3
|
ThingPot: an interactive Internet-of-Things honeypot
|
[
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
}
] |
[
{
"docid": "ec24b9b58bd95b28756e6b9b1796f4de",
"text": "This article surveys contemporary trends in leadership theory as well as its current status and the social context that has shaped the contours of leadership studies. Emphasis is placed on the urgent need for collaboration among social-neuro-cognitive scientists in order to achieve an integrated theory, and the author points to promising leads for accomplishing this. He also asserts that the 4 major threats to world stability are a nuclear/biological catastrophe, a world-wide pandemic, tribalism, and the leadership of human institutions. Without exemplary leadership, solving the problems stemming from the first 3 threats will be impossible.",
"title": ""
},
{
"docid": "3ab1e2768c1f612f1f85ddb192b37e1f",
"text": "The vertical Cup-to-Disc Ratio (CDR) is an important indicator in the diagnosis of glaucoma. Automatic segmentation of the optic disc (OD) and optic cup is crucial towards a good computer-aided diagnosis (CAD) system. This paper presents a statistical model-based method for the segmentation of the optic disc and optic cup from digital color fundus images. The method combines knowledge-based Circular Hough Transform and a novel optimal channel selection for segmentation of the OD. Moreover, we extended the method to optic cup segmentation, which is a more challenging task. The system was tested on a dataset of 325 images. The average Dice coefficient for the disc and cup segmentation is 0.92 and 0.81 respectively, which improves significantly over existing methods. The proposed method has a mean absolute CDR error of 0.10, which outperforms existing methods. The results are promising and thus demonstrate a good potential for this method to be used in a mass screening CAD system.",
"title": ""
},
{
"docid": "df1db7eae960d3b16edb8d001b7b1f22",
"text": "This letter presents a novel approach for providing substrate-integrated waveguide tunable resonators by means of placing an additional metalized via-hole on the waveguide cavity. The via-hole contains an open-loop slot on the top metallic wall. The dimensions, position and orientation of the open-loop slot defines the tuning range. Fabrication of some designs reveals good agreement between simulation and measurements. Additionally, a preliminary prototype which sets the open-loop slot orientation manually is also presented, achieving a continuous tuning range of 8%.",
"title": ""
},
{
"docid": "7b4c5206475d124574c99a5285ca9886",
"text": "The concept of Mars Sample Return (MSR) has been considered since the 1960s and is still a top priority for the planetary science community. [1] Although a plan on the number and types of samples to be collected for MSR has been outlined, as articulated in the Mars 2020 Science Definition Team report [2], the trade space of options to return this sample from the surface of Mars to the surface of the Earth is still being explored. One of the main challenges with MSR is that it is inherently a multi-vehicle system where each vehicle's design impacts that of the others. Defining the trade space must therefore be treated as a System of Systems (SoS) problem. The work presented puts forward a framework to rapidly explore such spatially and temporally distributed systems. It investigates the possible vehicle and technology options for MSR, assuming that a packaged sample has been left on the surface of Mars. It also evaluates how launch sequencing choices affect the expected return on investment of different architectures. The paper explores eight key trades, including different types of landing and propulsion systems, as well as low-cost direct return options. A large set of architectures are compared to the baseline proposed in the Planetary Science Decadal Survey [1] for MSR, which consists of a stationary lander, a small fetch rover, a Mars Ascent Vehicle (MAV), and a return orbiter with chemical propulsion. Overall, the baseline is found to be well optimized, although a few options, including the use of solar electric propulsion and of a roving vehicle carrying the MAV to the sample, are shown to offer a better return on investment. Furthermore, when considering only the goals of MSR, an approach where the lander is sent to Mars at least one launch window ahead of the return orbiter is demonstrated to be preferable.",
"title": ""
},
{
"docid": "b540fb20a265d315503543a5d752f486",
"text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"title": ""
},
{
"docid": "1c770d3ffd7b7bf4ba2ea39c5e161168",
"text": "Website: http://hcsi.cs.tsinghua.edu.cn/ • Stress is composed of two key factors: stressor and stress level. • Stressor, comprising of stressor event and stressor subject, triggers stress • Measuring stress via social media is complicated: • Stressor subject detection • Stressor events happening to other subjects can also be someone’s stress trigger. • Social media postings are usually informal and ambiguous • Stressor event detection • Stressor events are correlated • For some specific event categories, such as death, there are insufficient training samples Challenges: Measuring Psychological Stress via Social Media",
"title": ""
},
{
"docid": "758880e02554dd63b92da065742147d5",
"text": "1Department of Computer Science, Faculty of Science and Technology, Universidade Nova de Lisboa, Lisboa, Portugal 2Center for Biomedical Technology, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain 3Data, Networks and Cybersecurity Research Institute, Univ. Rey Juan Carlos, 28028 Madrid, Spain 4Department of Applied Mathematics, Universidad Rey Juan Carlos, 28933 Móstoles, Madrid, Spain 5Center for Computational Simulation, 28223 Pozuelo de Alarcón, Madrid, Spain 6Cyber Security & Digital Trust, BBVA Group, 28050 Madrid, Spain",
"title": ""
},
{
"docid": "4453c85d0fc1513e9657731d84896864",
"text": "A number of studies have looked at the prevalence rates of psychiatric disorders in the community in Pakistan over the last two decades. However, a very little information is available on psychiatric morbidity in primary health care. We therefore decided to measure prevalence of psychiatric disorders and their correlates among women from primary health care facilities in Lahore. We interviewed 650 women in primary health care settings in Lahore. We used a semi-structured interview and questionnaires to collect information during face-to-face interviews. Nearly two-third of the women (64.3%) in our study were diagnosed to have a psychiatric problem, while one-third (30.4%) suffered with Major Depressive Disorder. Stressful life events, verbal violence and battering were positively correlated with psychiatric morbidity and social support, using reasoning to resolve conflicts and education were negatively correlated with psychiatric morbidity. The prevalence of psychiatric disorders is in line with the prevalence figures found in community studies. Domestic violence is an important correlate which can be the focus of interventions.",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "755f7d663e813d7450089fc0d7058037",
"text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.",
"title": ""
},
{
"docid": "fb763a2142bd744cc61718939054747f",
"text": "A new method of image transmission and cryptography on the basis of Mobius transformation is proposed in this paper. Based on the Mobius transformation, the method of modulation and demodulation in Chen-Mobius communication system, which is quite different from the traditional one, is applied in the image transmission and cryptography. To make such a processing, the Chen-Mobius inverse transformed functions act as the “modulation” waveforms and the receiving end is coherently “demodulated” by the often-used digital waveforms. Simulation results are discussed in some detail. It shows that the new application has excellent performances that the digital image signals can be restored from intense noise and encrypted ones.",
"title": ""
},
{
"docid": "bc1f7e30b8dcef97c1d8de2db801c4f6",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
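The record above describes writing a trial solution as the sum of a term that satisfies the initial condition and a term shaped by the trained KLMS model. A common form of this construction for a first-order initial value problem y'(x) = f(x, y), y(x_0) = y_0 is sketched below; the exact parametrization used with KLMS may differ, so N(x; θ) simply denotes the trained model's output.

```latex
% Trial solution: the first term meets the initial condition by construction,
% and the factor (x - x_0) makes the second term vanish at x_0.
\hat{y}(x) = y_0 + (x - x_0)\, N(x;\theta)

% Training minimizes the squared ODE residual over collocation points x_i:
\min_{\theta} \sum_i \left( \frac{d\hat{y}}{dx}(x_i) - f\big(x_i, \hat{y}(x_i)\big) \right)^{2}
```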
{
"docid": "f51a1451a1e011bd7fee3cc70f549f84",
"text": "The present study examined relations among neighborhood structural and social characteristics, parenting practices, peer group affiliations, and delinquency among a group of serious adolescent offenders. The sample of 14-18-year-old boys (N=488) was composed primarily of economically disadvantaged, ethnic-minority youth living in urban communities. The results indicate that weak neighborhood social organization is indirectly related to delinquency through its associations with parenting behavior and peer deviance and that a focus on just 1 of these microsystems can lead to oversimplified models of risk for juvenile offending. The authors also find that community social ties may confer both pro- and antisocial influences to youth, and they advocate for a broad conceptualization of neighborhood social processes as these relate to developmental risk for youth living in disadvantaged communities.",
"title": ""
},
{
"docid": "85cfda0c6a2964d342035b45d2ad47ab",
"text": "Distributed Denial of Service (DDoS) attacks grow rapidly and become one of the fatal threats to the Internet. Automatically detecting DDoS attack packets is one of the main defense mechanisms. Conventional solutions monitor network traffic and identify attack activities from legitimate network traffic based on statistical divergence. Machine learning is another method to improve identifying performance based on statistical features. However, conventional machine learning techniques are limited by the shallow representation models. In this paper, we propose a deep learning based DDoS attack detection approach (DeepDefense). Deep learning approach can automatically extract high-level features from low-level ones and gain powerful representation and inference. We design a recurrent deep neural network to learn patterns from sequences of network traffic and trace network attack activities. The experimental results demonstrate a better performance of our model compared with conventional machine learning models. We reduce the error rate from 7.517% to 2.103% compared with conventional machine learning method in the larger data set.",
"title": ""
},
{
"docid": "5b79a4fcedaebf0e64b7627b2d944e22",
"text": "Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or nongradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network’s ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a selfreplication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.",
"title": ""
},
{
"docid": "b0385e647424e56cb8b854a4d78dd762",
"text": "Trustworthy operation of industrial control systems depends on secure and real-time code execution on the embedded programmable logic controllers (PLCs). The controllers monitor and control the critical infrastructures, such as electric power grids and healthcare platforms, and continuously report back the system status to human operators. We present Zeus, a contactless embedded controller security monitor to ensure its execution control flow integrity. Zeus leverages the electromagnetic emission by the PLC circuitry during the execution of the controller programs. Zeus's contactless execution tracking enables non-intrusive monitoring of security-critical controllers with tight real-time constraints. Those devices often cannot tolerate the cost and performance overhead that comes with additional traditional hardware or software monitoring modules. Furthermore, Zeus provides an air-gap between the monitor (trusted computing base) and the target (potentially compromised) PLC. This eliminates the possibility of the monitor infection by the same attack vectors.\n Zeus monitors for control flow integrity of the PLC program execution. Zeus monitors the communications between the human machine interface and the PLC, and captures the control logic binary uploads to the PLC. Zeus exercises its feasible execution paths, and fingerprints their emissions using an external electromagnetic sensor. Zeus trains a neural network for legitimate PLC executions, and uses it at runtime to identify the control flow based on PLC's electromagnetic emissions. We implemented Zeus on a commercial Allen Bradley PLC, which is widely used in industry, and evaluated it on real-world control program executions. Zeus was able to distinguish between different legitimate and malicious executions with 98.9% accuracy and with zero overhead on PLC execution by design.",
"title": ""
},
{
"docid": "45a4b46a303d120838037328e6606a5d",
"text": "Learned models composed of probabilistic logical rules are useful for many tasks, such as knowledge base completion. Unfortunately this learning problem is difficult, since determining the structure of the theory normally requires solving a discrete optimization problem. In this paper, we propose an alternative approach: a completely differentiable model for learning sets of first-order rules. The approach is inspired by a recently-developed differentiable logic, i.e. a subset of first-order logic for which inference tasks can be compiled into sequences of differentiable operations. Here we describe a neural controller system which learns how to sequentially compose the these primitive differentiable operations to solve reasoning tasks, and in particular, to perform knowledge base completion. The long-term goal of this work is to develop integrated, end-to-end systems that can learn to perform high-level logical reasoning as well as lower-level perceptual tasks.",
"title": ""
},
{
"docid": "6e1de72d7882139c7b94f762bd45424b",
"text": "We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.",
"title": ""
},
{
"docid": "b0de8371b0f5bfcecd8370bb0fdac174",
"text": "We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP; given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean traveling salesman problem lies in the counting hierarchy - the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE",
"title": ""
},
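To make the objects in the PosSLP record above concrete, here is a small illustrative evaluator (not taken from the paper) for division-free straight-line programs: each instruction is either the constant 1 or applies +, -, or * to two earlier results, and the question is whether the final integer is positive. Direct evaluation works for toy programs, but the computed value can have exponentially many bits in the program length, which is why this brute-force check is not an efficient decision procedure.

```python
# Minimal straight-line program (SLP) evaluator -- illustrative only.
# An SLP is a list of instructions; each produces one integer value:
#   ("const",)   -> the constant 1
#   (op, i, j)   -> values[i] op values[j], for op in {"+", "-", "*"}
def eval_slp(program):
    values = []
    for instr in program:
        if instr[0] == "const":
            values.append(1)
        else:
            op, i, j = instr
            a, b = values[i], values[j]
            values.append(a + b if op == "+" else a - b if op == "-" else a * b)
    return values[-1]

def pos_slp(program):
    """Decide PosSLP by brute force: is the computed integer N > 0?"""
    return eval_slp(program) > 0

# Example program computing ((1+1)*(1+1)) - 1 = 3, so PosSLP answers True.
prog = [("const",), ("+", 0, 0), ("*", 1, 1), ("-", 2, 0)]
print(pos_slp(prog))  # True
```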
{
"docid": "7adbcbcf5d458087d6f261d060e6c12b",
"text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.",
"title": ""
}
] |
scidocsrr
|
a4083eef0dba8b7853624cc18373d1e8
|
A cloud robot system using the dexterity network and berkeley robotics and automation as a service (Brass)
|
[
{
"docid": "1eca0e6a170470a483dc25196e6cca63",
"text": "Benchmarks for Cloud Robotics",
"title": ""
}
] |
[
{
"docid": "fec4b030280f228c2568c4a5eccbac28",
"text": "Distillation columns with a high-purity product (down to 7 ppm) have been studied. A steady state m odel is developed using a commercial process simulator. The model is validated against industrial data. Based on the mod el, three major optimal operational changes are identified. T hese are, lowering the location of the feed & side draw strea ms, increasing the pressure at the top of the distillat ion column and changing the configuration of the products draw. It is estimated that these three changes will increase th e throughput of each column by ~5%. The validated model is also u ed to quantify the effects on key internal column paramet ers such as the flooding factor, in the event of significant ch anges to product purity and throughput. Keywordshigh-purity distillation columns; steady state model, operating condition optimization",
"title": ""
},
{
"docid": "731df77ded13276e7bdb9f67474f3810",
"text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). We also show that the max-coloring problem is NP-hard.",
"title": ""
},
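The objective and the first-fit strategy in the max-coloring record above can be made concrete with a small sketch (an illustration, not the paper's algorithm): weighted intervals are processed in arrival order, first-fit assigns each one the smallest color unused by overlapping intervals seen so far, and the cost is the sum over color classes of the heaviest interval in the class.

```python
# Illustrative first-fit max-coloring of weighted intervals (hypothetical data).
# Each interval is (start, end, weight); overlapping intervals need different colors.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def first_fit_max_coloring(intervals):
    colors = []                                   # colors[i] = color of intervals[i]
    for i, iv in enumerate(intervals):
        used = {colors[j] for j in range(i) if overlaps(iv, intervals[j])}
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    # Max-coloring objective: sum over color classes of the heaviest member.
    cost = sum(max(iv[2] for iv, col in zip(intervals, colors) if col == k)
               for k in range(max(colors) + 1))
    return colors, cost

ivs = [(0, 4, 5), (1, 3, 2), (2, 6, 7), (5, 8, 1)]
print(first_fit_max_coloring(ivs))  # ([0, 1, 2, 0], 14): class maxima are 5, 2 and 7
```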
{
"docid": "417186e59f537a0f6480fc7e05eafb0c",
"text": "Retrieving correct answers for non-factoid queries poses significant challenges for current answer retrieval methods. Methods either involve the laborious task of extracting numerous features or are ineffective for longer answers. We approach the task of non-factoid question answering using deep learning methods without the need of feature extraction. Neural networks are capable of learning complex relations based on relatively simple features which make them a prime candidate for relating non-factoid questions to their answers. In this paper, we show that end to end training with a Bidirectional Long Short Term Memory (BLSTM) network with a rank sensitive loss function results in significant performance improvements over previous approaches without the need for combining additional models.",
"title": ""
},
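The abstract above mentions a rank-sensitive loss without specifying it; a commonly used choice in answer-ranking work is a pairwise hinge loss over question-answer similarity scores. The sketch below shows that generic loss and should be read as an assumption, not necessarily the authors' exact formulation.

```python
import numpy as np

def pairwise_hinge_loss(score_pos, score_neg, margin=0.5):
    """Rank-sensitive hinge loss: penalize the model whenever the correct answer
    does not beat the incorrect answer by at least `margin`. The scores could be,
    e.g., cosine similarities between BLSTM encodings of question and answer."""
    return np.maximum(0.0, margin - score_pos + score_neg)

print(pairwise_hinge_loss(0.8, 0.1))  # 0.0 -> ranked correctly with enough margin
print(pairwise_hinge_loss(0.4, 0.3))  # 0.4 -> margin violated, loss is positive
```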
{
"docid": "55772e55adb83d4fd383ddebcf564a71",
"text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.",
"title": ""
},
{
"docid": "11747931101b7dd3fed01380396b8fa5",
"text": "Unsupervised word translation from nonparallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Our simple linear method is able to achieve better or equal performance to recent state-of-theart deep adversarial approaches and typically does a little better than the supervised baseline. Our method is also efficient, easy to parallelize and interpretable.",
"title": ""
},
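A minimal numpy sketch of the two-stage idea in the record above, under the assumption that the second-moment alignment is a whitening-style covariance match and the refinement alternates nearest-neighbour matching with an orthogonal Procrustes update; the authors' exact steps may differ.

```python
import numpy as np

def whiten(E):
    """Zero-mean the embeddings and equalize their second moment (covariance -> identity)."""
    E = E - E.mean(axis=0)
    cov = E.T @ E / len(E)
    vals, vecs = np.linalg.eigh(cov)
    return E @ vecs @ np.diag(vals ** -0.5) @ vecs.T

def iterative_alignment(X, Y, n_iter=10):
    """X, Y: (n_words, dim) embeddings of two languages (small vocabularies only,
    rows roughly comparable in frequency). Returns an orthogonal map W with X @ W ~ Y."""
    X, Y = whiten(X), whiten(Y)
    W = np.eye(X.shape[1])
    for _ in range(n_iter):
        nn = np.argmax((X @ W) @ Y.T, axis=1)      # match each source word to a target
        U, _, Vt = np.linalg.svd(X.T @ Y[nn])      # Procrustes: best orthogonal map
        W = U @ Vt
    return W
```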
{
"docid": "773c4a4640d587e58cf80c9371ad20fc",
"text": "Building automation systems are traditionally concerned with the control of heating, ventilation, and air conditioning, as well as lighting and shading, systems. They have their origin in a time where security has been considered as a side issue at best. Nowadays, with the rising desire to integrate security-critical services that were formerly provided by isolated subsystems, security must no longer be neglected. Thus, the development of a comprehensive security concept is of utmost importance. This paper starts with a security threat analysis and identifies the challenges of providing security in the building automation domain. Afterward, the security mechanisms of available standards are thoroughly analyzed. Finally, two approaches that provide both secure communication and secure execution of possibly untrusted control applications are presented.",
"title": ""
},
{
"docid": "c5d2238833ab8332a71b64010f034970",
"text": "Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.",
"title": ""
},
{
"docid": "fc63dbad7a3c6769ee1a1df19da6e235",
"text": "For global companies that compete in high-velocity industries, business strategies and initiatives change rapidly, and thus the CIO struggles to keep the IT organization aligned with a moving target. In this paper we report on research-in-progress that focuses on how the CIO attempts to meet this challenge. Specifically, we are conducting case studies to closely examine how toy industry CIOs develop their IT organizations’ assets, competencies, and dynamic capabilities in alignment with their companies’ evolving strategy and business priorities (which constitute the “moving target”). We have chosen to study toy industry CIOs, because their companies compete in a global, high-velocity environment, yet this industry has been largely overlooked by the information systems research community. Early findings reveal that four IT application areas are seen as holding strong promise: supply chain management, knowledge management, data mining, and eCommerce, and that toy CIO’s are attempting to both cope with and capitalize on the current financial crisis by more aggressively pursuing offshore outsourcing than heretofore. We conclude with a discussion of next steps as the study proceeds.",
"title": ""
},
{
"docid": "d3fcda423467ef93f37ef2b7dbe9be13",
"text": "The Java programming language [1,3] from its inception has been publicized as a web programming language. Many programmers have developed simple applications such as games, clocks, news tickers and stock tickers in order to create informative, innovative web sites. However, it is important to note that the Java programming language possesses much more capability. The language components and constructs originally designed to enhance the functionality of Java as a web-based programming language can be utilized in a broader extent. Java provides a developer with the tools allowing for the creation of innovative network, database, and Graphical User Interface (GUI) applications. In fact, Java and its associated technologies such as JDBC API [11,5], JDBC drivers [2,12], threading [10], and AWT provide the programmer with the much-needed assistance for the development of platform-independent database-independent interfaces. Thus, it is possible to build a graphical database interface capable of connecting and querying distributed databases [13,14]. Here are components that are important for building the database interface we have in mind.",
"title": ""
},
{
"docid": "5ee21318b1601a1d42162273a7c9026c",
"text": "We used a knock-in strategy to generate two lines of mice expressing Cre recombinase under the transcriptional control of the dopamine transporter promoter (DAT-cre mice) or the serotonin transporter promoter (SERT-cre mice). In DAT-cre mice, immunocytochemical staining of adult brains for the dopamine-synthetic enzyme tyrosine hydroxylase and for Cre recombinase revealed that virtually all dopaminergic neurons in the ventral midbrain expressed Cre. Crossing DAT-cre mice with ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice revealed a near perfect correlation between staining for tyrosine hydroxylase and beta-galactosidase or YFP. YFP-labeled fluorescent dopaminergic neurons could be readily identified in live slices. Crossing SERT-cre mice with the ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice similarly revealed a near perfect correlation between staining for serotonin-synthetic enzyme tryptophan hydroxylase and beta-galactosidase or YFP. Additional Cre expression in the thalamus and cortex was observed, reflecting the known pattern of transient SERT expression during early postnatal development. These findings suggest a general strategy of using neurotransmitter transporter promoters to drive selective Cre expression and thus control mutations in specific neurotransmitter systems. Crossed with fluorescent-gene reporters, this strategy tags neurons by neurotransmitter status, providing new tools for electrophysiology and imaging.",
"title": ""
},
{
"docid": "6379e89db7d9063569a342ef2056307a",
"text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.",
"title": ""
},
{
"docid": "9df0df8eb4f71d8c6952e07a179b2ec4",
"text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.",
"title": ""
},
{
"docid": "5010761051983f5de1f18a11d477f185",
"text": "Financial forecasting has been challenging problem due to its high non-linearity and high volatility. An Artificial Neural Network (ANN) can model flexible linear or non-linear relationship among variables. ANN can be configured to produce desired set of output based on set of given input. In this paper we attempt at analyzing the usefulness of artificial neural network for forecasting financial data series with use of different algorithms such as backpropagation, radial basis function etc. With their ability of adapting non-linear and chaotic patterns, ANN is the current technique being used which offers the ability of predicting financial data more accurately. \"A x-y-1 network topology is adopted because of x input variables in which variable y was determined by the number of hidden neurons during network selection with single output.\" Both x and y were changed.",
"title": ""
},
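As a hedged illustration of the x-y-1 topology described above, the snippet below fits a small feedforward network to lagged values of a synthetic price series; the data, lag count, and hidden-layer size are placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=300)) + 100      # synthetic price series

x_lags, y_hidden = 5, 8                             # the "x" inputs and "y" hidden neurons
X = np.array([prices[i:i + x_lags] for i in range(len(prices) - x_lags)])
y = prices[x_lags:]                                 # one-step-ahead target (the single output)

model = MLPRegressor(hidden_layer_sizes=(y_hidden,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                         # train on all but the last 50 points
print("held-out R^2:", model.score(X[-50:], y[-50:]))
```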
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "d5941d8af75741a9ee3a1e49eb3177ea",
"text": "The description of sphero-cylinder lenses is approached from the viewpoint of Fourier analysis of the power profile. It is shown that the familiar sine-squared law leads naturally to a Fourier series representation with exactly three Fourier coefficients, representing the natural parameters of a thin lens. The constant term corresponds to the mean spherical equivalent (MSE) power, whereas the amplitude and phase of the harmonic correspond to the power and axis of a Jackson cross-cylinder (JCC) lens, respectively. Expressing the Fourier series in rectangular form leads to the representation of an arbitrary sphero-cylinder lens as the sum of a spherical lens and two cross-cylinders, one at axis 0 degree and the other at axis 45 degrees. The power of these three component lenses may be interpreted as (x,y,z) coordinates of a vector representation of the power profile. Advantages of this power vector representation of a sphero-cylinder lens for numerical and graphical analysis of optometric data are described for problems involving lens combinations, comparison of different lenses, and the statistical distribution of refractive errors.",
"title": ""
},
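For concreteness, the rectangular form described above maps a conventional sphere/cylinder/axis prescription to three coordinates: the mean spherical equivalent M and two cross-cylinder components J0 and J45. The function below uses the standard power-vector formulas and is offered as an illustration rather than a transcription of this paper.

```python
import math

def power_vector(sphere, cylinder, axis_deg):
    """Convert sphere S, cylinder C (dioptres) and cylinder axis (degrees) to (M, J0, J45)."""
    a = math.radians(axis_deg)
    M = sphere + cylinder / 2.0                # mean spherical equivalent power
    J0 = -(cylinder / 2.0) * math.cos(2 * a)   # cross-cylinder component at axis 0/90
    J45 = -(cylinder / 2.0) * math.sin(2 * a)  # cross-cylinder component at axis 45/135
    return M, J0, J45

# Example: -2.00 DS / -1.00 DC x 180 maps to approximately (M, J0, J45) = (-2.50, +0.50, 0.00)
print(power_vector(-2.0, -1.0, 180))
```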
{
"docid": "2b8311fa53968e7d7b6db90d81c35d4e",
"text": "Maintaining healthy blood glucose concentration levels is advantageous for the prevention of diabetes and obesity. Present day technologies limit such monitoring to patients who already have diabetes. The purpose of this project is to suggest a non-invasive method for measuring blood glucose concentration levels. Such a method would provide useful for even people without illness, addressing preventive care. This project implements near-infrared light of wavelengths 1450nm and 2050nm through the use of light emitting diodes and measures transmittance through solutions of distilled water and d-glucose of concentrations 50mg/dL, 100mg/dL, 150mg/dL, and 200mg/dL by using an InGaAs photodiode. Regression analysis is done. Transmittance results were observed when using near-infrared light of wavelength 1450nm. As glucose concentration increases, output voltage from the photodiode also increases. The relation observed was linear. No significant transmittance results were obtained with the use of 2050nm infrared light due to high absorbance and low power. The use of 1450nm infrared light provides a means of measuring glucose concentration levels.",
"title": ""
},
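The regression step mentioned above can be illustrated with a simple least-squares calibration of photodiode output against known concentration; the voltages below are hypothetical placeholders, not the project's measurements.

```python
import numpy as np

conc = np.array([50.0, 100.0, 150.0, 200.0])      # mg/dL, prepared d-glucose solutions
volt = np.array([0.42, 0.47, 0.53, 0.58])         # hypothetical photodiode output (V)

slope, intercept = np.polyfit(conc, volt, 1)       # linear calibration curve
predict_conc = lambda v: (v - intercept) / slope   # invert it to estimate concentration
print(round(predict_conc(0.50), 1), "mg/dL")       # -> 125.0 mg/dL for a 0.50 V reading
```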
{
"docid": "5bb390a0c9e95e0691ac4ba07b5eeb9d",
"text": "Clearing the clouds away from the true potential and obstacles posed by this computing capability.",
"title": ""
},
{
"docid": "4142b1fc9e37ffadc6950105c3d99749",
"text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)",
"title": ""
},
{
"docid": "1b1dc71cd5ae84c2ae27a1c36f638073",
"text": "Despite a prevalent industry perception to the contrary, the agile practices of Test-Driven Development and Continuous Integration can be successfully applied to embedded software. We present here a holistic set of practices, platform independent tools, and a new design pattern (Model Conductor Hardware MCH) that together produce: good design from tests programmed first, logic decoupled from hardware, and systems testable under automation. Ultimately, this approach yields an order of magnitude or more reduction in software flaws, predictable progress, and measurable velocity for data-driven project management. We use the approach discussed herein for real-world production systems and have included a full C-based sample project (using an Atmel AT91SAM7X ARM7) to illustrate it. This example demonstrates transforming requirements into test code, system, integration, and unit tests driving development, daily “micro design” fleshing out a system’s architecture, the use of the MCH itself, and the use of mock functions in tests.",
"title": ""
},
{
"docid": "22951590c72e3f7a7c913ab8956dc06a",
"text": "In the precursor paper, a many-objective optimization method (NSGA-III), based on the NSGA-II framework, was suggested and applied to a number of unconstrained test and practical problems with box constraints alone. In this paper, we extend NSGA-III to solve generic constrained many-objective optimization problems. In the process, we also suggest three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer. A previously suggested MOEA/D algorithm is also extended to solve constrained problems. Results using constrained NSGA-III and constrained MOEA/D show an edge of the former, particularly in solving problems with a large number of objectives. Furthermore, the NSGA-III algorithm is made adaptive in updating and including new reference points on the fly. The resulting adaptive NSGA-III is shown to provide a denser representation of the Pareto-optimal front, compared to the original NSGA-III with an identical computational effort. This, and the original NSGA-III paper, together suggest and amply test a viable evolutionary many-objective optimization algorithm for handling constrained and unconstrained problems. These studies should encourage researchers to use and pay further attention in evolutionary many-objective optimization.",
"title": ""
}
] |
scidocsrr
|
7790af0a9eff3fe9c19cf8bcd0395fef
|
On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty
|
[
{
"docid": "7b46cf9aa63423485f4f48d635cb8f5c",
"text": "It sounds good when knowing the multiple criteria decision analysis an integrated approach in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.",
"title": ""
}
] |
[
{
"docid": "d03f900c785a5d6abf8bb16434693e4d",
"text": "Juvenile gigantomastia is a benign disorder of the breast in which one or both of the breasts undergo a massive increase in size during adolescence. The authors present a series of four cases of juvenile gigantomastia, advances in endocrine management, and the results of surgical therapy. Three patients were treated for initial management of juvenile gigantomastia and one patient was evaluated for a gestationally induced recurrence of juvenile gigantomastia. The three women who presented for initial management had a complete evaluation to rule out other etiologies of breast enlargement. Endocrine therapy was used in 2 patients, one successfully. A 17-year-old girl had unilateral hypertrophy treated with reduction surgery. She had no recurrence and did not require additional surgery. Two patients, ages 10 and 12 years, were treated at a young age with reduction mammaplasty, and both of these girls required secondary surgery for treatment. One patient underwent subtotal mastectomy with implant reconstruction but required two subsequent operations for removal of recurrent hypertrophic breast tissue. The second patient started a course of tamoxifen followed by reduction surgery. While on tamoxifen, the second postoperative result remained stable, and the contralateral breast, which had exhibited some minor hypertrophy, regressed in size. The fourth patient was a gravid 24-year-old who had been treated for juvenile gigantomastia at age 14, and presented with gestationally induced recurrent hypertrophy. The authors' experience has been that juvenile gigantomastia in young patients is prone to recurrence, and is in agreement with previous studies that subcutaneous mastectomy provides definitive treatment. However, tamoxifen may be a useful adjunct and may allow stable results when combined with reduction mammaplasty. If successful, the use of tamoxifen would eliminate the potential complications of breast prostheses. Lastly, the 17-year-old patient did not require secondary surgery, suggesting that older patients may be treated definitively with reduction surgery alone.",
"title": ""
},
{
"docid": "7ef20dc3eb5ec7aee75f41174c9fae12",
"text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "4261e44dad03e8db3c0520126b9c7c4d",
"text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.",
"title": ""
},
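The abstract above does not spell out the transformation, but landmark-based standardization of this kind is commonly implemented as a two-step procedure: learn standard-scale intensity landmarks (for example, percentiles) from training images, then map each new image's landmarks onto them piecewise-linearly. The sketch below shows that generic scheme as an assumption about the flavour of method, not the authors' exact variant.

```python
import numpy as np

PCTS = [0, 10, 25, 50, 75, 90, 100]          # landmark percentiles (an arbitrary choice)

def learn_standard_scale(training_images):
    """Step 1: average the landmark intensities over a set of training images."""
    return np.mean([np.percentile(img, PCTS) for img in training_images], axis=0)

def standardize(img, standard_landmarks):
    """Step 2: piecewise-linear map sending this image's landmarks to the standard ones
    (assumes the image's own landmarks are strictly increasing)."""
    own = np.percentile(img, PCTS)
    return np.interp(img, own, standard_landmarks)
```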
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "106ec8b5c3f5bff145be2bbadeeafe68",
"text": "Objective: To provide a parsimonious clustering pipeline that provides comparable performance to deep learning-based clustering methods, but without using deep learning algorithms, such as autoencoders. Materials and methods: Clustering was performed on six benchmark datasets, consisting of five image datasets used in object, face, digit recognition tasks (COIL20, COIL100, CMU-PIE, USPS, and MNIST) and one text document dataset (REUTERS-10K) used in topic recognition. K-means, spectral clustering, Graph Regularized Non-negative Matrix Factorization, and K-means with principal components analysis algorithms were used for clustering. For each clustering algorithm, blind source separation (BSS) using Independent Component Analysis (ICA) was applied. Unsupervised feature learning (UFL) using reconstruction cost ICA (RICA) and sparse filtering (SFT) was also performed for feature extraction prior to the cluster algorithms. Clustering performance was assessed using the normalized mutual information and unsupervised clustering accuracy metrics. Results: Performing, ICA BSS after the initial matrix factorization step provided the maximum clustering performance in four out of six datasets (COIL100, CMU-PIE, MNIST, and REUTERS-10K). Applying UFL as an initial processing component helped to provide the maximum performance in three out of six datasets (USPS, COIL20, and COIL100). Compared to state-of-the-art non-deep learning clustering methods, ICA BSS and/ or UFL with graph-based clustering algorithms outperformed all other methods. With respect to deep learning-based clustering algorithms, the new methodology presented here obtained the following rankings: COIL20, 2nd out of 5; COIL100, 2nd out of 5; CMU-PIE, 2nd out of 5; USPS, 3rd out of 9; MNIST, 8th out of 15; and REUTERS-10K, 4th out of 5. Discussion: By using only ICA BSS and UFL using RICA and SFT, clustering accuracy that is better or on par with many deep learning-based clustering algorithms was achieved. For instance, by applying ICA BSS to spectral clustering on the MNIST dataset, we obtained an accuracy of 0.882. This is better than the well-known Deep Embedded Clustering algorithm that had obtained an accuracy of 0.818 using stacked denoising autoencoders in its model. Open Access © The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. RESEARCH Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 https://doi.org/10.1186/s13673-018-0148-3 *Correspondence: [email protected] Department of Electrical and Computer Engineering, University of Ontario Institute of Technology, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada Page 2 of 19 Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 Conclusion: Using the new clustering pipeline presented here, effective clustering performance can be obtained without employing deep clustering algorithms and their accompanying hyper-parameter tuning procedure.",
"title": ""
},
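A minimal scikit-learn sketch of the kind of pipeline the record above describes: a blind-source-separation step with FastICA followed by k-means, scored with normalized mutual information. The dataset, component count, and cluster count are placeholders, and the authors' full pipeline (RICA/sparse-filtering feature learning, graph-based clustering) is not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

X, y = load_digits(return_X_y=True)                 # stand-in for USPS/MNIST-style data

# Blind source separation: re-express each image in an independent-component basis.
Z = FastICA(n_components=30, random_state=0).fit_transform(X)

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)
print("NMI:", normalized_mutual_info_score(y, labels))
```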
{
"docid": "1ae3bacfff3bffad223eb6cad7250fc3",
"text": "The effects of a human head on the performance of small planar ultra-wideband (UWB) antennas in proximity of the head are investigated numerically and experimentally. In simulation, a numerical head model is used in the XFDTD software package. The head model developed by REMCOM is with the frequency-dependent dielectric constant and conductivity obtained from the average data of anatomical human heads. Two types of planar antennas printed on printed circuit board (PCB) are designed to cover the UWB band. The impedance and radiation performance of the antennas are examined when the antennas are placed very close to the human head. The study shows that the human head slightly affects the impedance performance of the antennas. The radiated field distributions and the gain of the antennas demonstrate that the human head significantly blocks and absorbs the radiation from the antennas so that the radiation patterns are directional in the horizontal planes and the average gain greatly decreases. The information derived from the study is helpful to engineers who are applying UWB devices around/on human heads.",
"title": ""
},
{
"docid": "e8758a9e2b139708ca472dd60397dc2e",
"text": "Multiple photovoltaic (PV) modules feeding a common load is the most common form of power distribution used in solar PV systems. In such systems, providing individual maximum power point tracking (MPPT) schemes for each of the PV modules increases the cost. Furthermore, its v-i characteristic exhibits multiple local maximum power points (MPPs) during partial shading, making it difficult to find the global MPP using conventional single-stage (CSS) tracking. To overcome this difficulty, the authors propose a novel MPPT algorithm by introducing a particle swarm optimization (PSO) technique. The proposed algorithm uses only one pair of sensors to control multiple PV arrays, thereby resulting in lower cost, higher overall efficiency, and simplicity with respect to its implementation. The validity of the proposed algorithm is demonstrated through experimental studies. In addition, a detailed performance comparison with conventional fixed voltage, hill climbing, and Fibonacci search MPPT schemes are presented. Algorithm robustness was verified for several complicated partial shading conditions, and in all cases this method took about 2 s to find the global MPP.",
"title": ""
},
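To make the PSO-based tracking step concrete, here is a generic particle-swarm search over the operating voltage that maximizes measured PV power; the toy power curve, bounds, and PSO coefficients are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

def pso_mppt(measure_power, v_min, v_max, n_particles=6, n_iter=30,
             w=0.6, c1=1.5, c2=1.5, seed=0):
    """Search for the voltage command that maximizes measure_power(v)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(v_min, v_max, n_particles)            # particle positions (voltages)
    vel = np.zeros(n_particles)
    pbest_v = v.copy()
    pbest_p = np.array([measure_power(x) for x in v])
    gbest_v = pbest_v[np.argmax(pbest_p)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest_v - v) + c2 * r2 * (gbest_v - v)
        v = np.clip(v + vel, v_min, v_max)
        p = np.array([measure_power(x) for x in v])
        better = p > pbest_p
        pbest_v[better], pbest_p[better] = v[better], p[better]
        gbest_v = pbest_v[np.argmax(pbest_p)]
    return gbest_v

# Toy P-V curve with two local maxima, mimicking partial shading.
toy_pv = lambda v: max(v * (8 - 0.4 * v), v * (5 - 0.12 * v)) if 0 <= v <= 40 else 0.0
print(pso_mppt(toy_pv, 0.0, 40.0))   # should settle near the global maximum (~20.8 V)
```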
{
"docid": "0af8cffabf74b5955e1a7bb6edf48cdf",
"text": "One of the main challenges in game AI is building agents that can intelligently react to unforeseen game situations. In real-time strategy games, players create new strategies and tactics that were not anticipated during development. In order to build agents capable of adapting to these types of events, we advocate the development of agents that reason about their goals in response to unanticipated game events. This results in a decoupling between the goal selection and goal execution logic in an agent. We present a reactive planning implementation of the Goal-Driven Autonomy conceptual model and demonstrate its application in StarCraft. Our system achieves a win rate of 73% against the builtin AI and outranks 48% of human players on a competitive ladder server.",
"title": ""
},
{
"docid": "f2fc46012fa4b767f514b9d145227ec7",
"text": "Derivation of backpropagation in convolutional neural network (CNN) is conducted based on an example with two convolutional layers. The step-by-step derivation is helpful for beginners. First, the feedforward procedure is claimed, and then the backpropagation is derived based on the example. 1 Feedforward",
"title": ""
},
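For reference, the core relations such a derivation establishes for a single 2-D convolution layer are summarized below in one common (cross-correlation) notation; indexing conventions vary across write-ups.

```latex
% Feedforward for one convolutional layer (single channel, valid cross-correlation):
y_{ij} = \sum_{m}\sum_{n} w_{mn}\, x_{i+m,\, j+n} + b

% Backpropagation with upstream error \delta_{ij} = \partial L / \partial y_{ij}:
\frac{\partial L}{\partial w_{mn}} = \sum_{i}\sum_{j} \delta_{ij}\, x_{i+m,\, j+n},
\qquad
\frac{\partial L}{\partial b} = \sum_{i}\sum_{j} \delta_{ij}

% The error passed to the input is a "full" convolution with the flipped kernel
% (\delta taken as zero outside its valid range):
\frac{\partial L}{\partial x_{pq}} = \sum_{m}\sum_{n} \delta_{p-m,\, q-n}\, w_{mn}
```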
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "6264a8e43070f686375150b4beadaee7",
"text": "A control law for an integrated power/attitude control system (IPACS) for a satellite is presented. Four or more energy/momentum wheels in an arbitrary noncoplanar con guration and a set of three thrusters are used to implement the torque inputs. The energy/momentum wheels are used as attitude-control actuators, as well as an energy storage mechanism, providing power to the spacecraft. In that respect, they can replace the currently used heavy chemical batteries. The thrusters are used to implement the torques for large and fast (slew) maneuvers during the attitude-initialization and target-acquisition phases and to implement the momentum management strategies. The energy/momentum wheels are used to provide the reference-tracking torques and the torques for spinning up or down the wheels for storing or releasing kinetic energy. The controller published in a previous work by the authors is adopted here for the attitude-tracking function of the wheels. Power tracking for charging and discharging the wheels is added to complete the IPACS framework. The torques applied by the energy/momentum wheels are decomposed into two spaces that are orthogonal to each other, with the attitude-control torques and power-tracking torques in each space. This control law can be easily incorporated in an IPACS system onboard a satellite. The possibility of the occurrence of singularities, in which no arbitrary energy pro le can be tracked, is studied for a generic wheel cluster con guration. A standard momentum management scheme is considered to null the total angular momentum of the wheels so as to minimize the gyroscopic effects and prevent the singularity from occurring. A numerical example for a satellite in a low Earth near-polar orbit is provided to test the proposed IPACS algorithm. The satellite’s boresight axis is required to track a ground station, and the satellite is required to rotate about its boresight axis so that the solar panel axis is perpendicular to the satellite–sun vector.",
"title": ""
},
{
"docid": "0e153353fb8af1511de07c839f6eaca5",
"text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.",
"title": ""
},
{
"docid": "18c517f26bceeb7930a4418f7a6b2f30",
"text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.",
"title": ""
},
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
{
"docid": "e938ad7500cecd5458e4f68e564e6bc4",
"text": "In this article, an adaptive fuzzy sliding mode control (AFSMC) scheme is derived for robotic systems. In the AFSMC design, the sliding mode control (SMC) concept is combined with fuzzy control strategy to obtain a model-free fuzzy sliding mode control. The equivalent controller has been replaced by a fuzzy system and the uncertainties are estimated online. The approach of the AFSMC has the learning ability to generate the fuzzy control actions and adaptively compensates for the uncertainties. Despite the high nonlinearity and coupling effects, the control input of the proposed control algorithm has been decoupled leading to a simplified control mechanism for robotic systems. Simulations have been carried out on a two link planar robot. Results show the effectiveness of the proposed control system.",
"title": ""
}
] |
scidocsrr
|
d9f71acd36247ac5f2ce09592a3fc642
|
A Survey of Communication Sub-systems for Intersatellite Linked Systems and CubeSat Missions
|
[
{
"docid": "60f6e3345aae1f91acb187ba698f073b",
"text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.",
"title": ""
}
] |
[
{
"docid": "a22bc61f0fa5733a1835f61056810422",
"text": "Humans are able to accelerate their learning by selecting training materials that are the most informative and at the appropriate level of difficulty. We propose a framework for distributing deep learning in which one set of workers search for the most informative examples in parallel while a single worker updates the model on examples selected by importance sampling. This leads the model to update using an unbiased estimate of the gradient which also has minimum variance when the sampling proposal is proportional to the L2-norm of the gradient. We show experimentally that this method reduces gradient variance even in a context where the cost of synchronization across machines cannot be ignored, and where the factors for importance sampling are not updated instantly across the training set.",
"title": ""
},
{
"docid": "7120cc5882438207ae432eb556d65e72",
"text": "A radar system with an ultra-wide FMCW ramp bandwidth of 25.6 GHz (≈32%) around a center frequency of 80 GHz is presented. The system is based on a monostatic fully integrated SiGe transceiver chip, which is stabilized using conventional fractional-N PLL chips at a reference frequency of 100 MHz. The achieved in-loop phase noise is ≈ -88 dBc/Hz (10 kHz offset frequency) for the center frequency and below ≈-80 dBc/Hz in the wide frequency band of 25.6 GHz for all offset frequencies >;1 kHz. The ultra-wide PLL-stabilization was achieved using a reverse frequency position mixer in the PLL (offset-PLL) resulting in a compensation of the variation of the oscillators tuning sensitivity with the variation of the N-divider in the PLL. The output power of the transceiver chip, as well as of the mm-wave module (containing a waveguide transition), is sufficiently flat versus the output frequency (variation <;3 dB). In radar measurements using the full bandwidth an ultra-high spatial resolution of 7.12 mm was achieved. The standard deviation between repeated measurements of the same target is 0.36 μm.",
"title": ""
},
{
"docid": "704cad33eed2b81125f856c4efbff4fa",
"text": "In order to realize missile real-time change flight trajectory, three-loop autopilot is setting up. The structure characteristics, autopilot model, and control parameters design method were researched. Firstly, this paper introduced the 11th order three-loop autopilot model. With the principle of systems reduce model order, the 5th order model was deduced. On that basis, open-loop frequency characteristic and closed-loop frequency characteristic were analyzed. The variables of velocity ratio, dynamic pressure ratio and elevator efficiency ratio were leading to correct system nonlinear. And then autopilot gains design method were induced. System flight simulations were done, and result shows that autopilot gains played a good job in the flight trajectory, autopilot satisfied the flight index.",
"title": ""
},
{
"docid": "8583f3735314a7d38bcb82f6acf781ce",
"text": "Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions— to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimizationbased controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "41611aef9542367f80d8898b1f71bead",
"text": "The economy-wide implications of sea level rise in 2050 are estimated using a static computable general equilibrium model. Overall, general equilibrium effects increase the costs of sea level rise, but not necessarily in every sector or region. In the absence of coastal protection, economies that rely most on agriculture are hit hardest. Although energy is substituted for land, overall energy consumption falls with the shrinking economy, hurting energy exporters. With full coastal protection, GDP increases, particularly in regions that do a lot of dike building, but utility falls, least in regions that build a lot of dikes and export energy. Energy prices rise and energy consumption falls. The costs of full protection exceed the costs of losing land.",
"title": ""
},
{
"docid": "816b2ed7d4b8ce3a8fc54e020bc2f712",
"text": "As a standardized communication protocol, OPC UA is the main focal point with regard to information exchange in the ongoing initiative Industrie 4.0. But there are also considerations to use it within the Internet of Things. The fact that currently no open reference implementation can be used in research for free represents a major problem in this context. The authors have the opinion that open source software can stabilize the ongoing theoretical work. Recent efforts to develop an open implementation for OPC UA were not able to meet the requirements of practical and industrial automation technology. This issue is addressed by the open62541 project which is presented in this article including an overview of its application fields and main research issues.",
"title": ""
},
{
"docid": "6f9be23e33910d44551b5befa219e557",
"text": "The Lecture Notes are used for the a short course on the theory and applications of the lattice Boltzmann methods for computational uid dynamics taugh by the author at Institut f ur Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universitat Braunschweig, during August 7 { 12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One brie y reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special nite di erence form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the uid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann mehod to particulate suspensions, turbulence ows, and other ows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature is also provided.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "9dc9b5bad3422a6f1c7f33ccb25fdead",
"text": "We present a named entity recognition (NER) system for extracting product attributes and values from listing titles. Information extraction from short listing titles present a unique challenge, with the lack of informative context and grammatical structure. In this work, we combine supervised NER with bootstrapping to expand the seed list, and output normalized results. Focusing on listings from eBay’s clothing and shoes categories, our bootstrapped NER system is able to identify new brands corresponding to spelling variants and typographical errors of the known brands, as well as identifying novel brands. Among the top 300 new brands predicted, our system achieves 90.33% precision. To output normalized attribute values, we explore several string comparison algorithms and found n-gram substring matching to work well in practice.",
"title": ""
},
{
"docid": "5c9ea5fcfef7bac1513a79fd918d3194",
"text": "Elderly suffers from injuries or disabilities through falls every year. With a high likelihood of falls causing serious injury or death, falling can be extremely dangerous, especially when the victim is home-alone and is unable to seek timely medical assistance. Our fall detection systems aims to solve this problem by automatically detecting falls and notify healthcare services or the victim’s caregivers so as to provide help. In this paper, development of a fall detection system based on Kinect sensor is introduced. Current fall detection algorithms were surveyed and we developed a novel posture recognition algorithm to improve the specificity of the system. Data obtained through trial testing with human subjects showed a 26.5% increase in fall detection compared to control algorithms. With our novel detection algorithm, the system conducted in a simulated ward scenario can achieve up to 90% fall detection rate.",
"title": ""
},
{
"docid": "47398ca11079b699e050f10e292855ac",
"text": "It is well known that 3DIC integration is the next generation semiconductor technology with the advantages of small form factor, high performance and low power consumption. However the device TSV process and design rules are not mature. Assembly the chips on top of the Si interposer is the current most desirable method to achieve the requirement of good performance. In this study, a new packaging concept, the Embedded Interposer Carrier (EIC) technology was developed. It aims to solve some of the problems facing current interposer assemble issues. It eliminates the joining process of silicon interposer to the laminate carrier substrate. The concept of EIC is to embed one or multiple interposer chips into the build-up dielectric layers in the laminated substrate. The process development of EIC structure is investigated in this paper. EIC technology not only can shrink an electronic package and system size but also provide a better electronic performance for high-bandwidth applications. EIC technology can be one of the potential solutions for 3D System-in-Package.",
"title": ""
},
{
"docid": "1c1a677e4e95ee6a7656db9683a19c9b",
"text": "With the rapid development of the Intelligent Transportation System (ITS), vehicular communication networks have been widely studied in recent years. Dedicated Short Range Communication (DSRC) can provide efficient real-time information exchange among vehicles without the need of pervasive roadside communication infrastructure. Although mobile cellular networks are capable of providing wide coverage for vehicular users, the requirements of services that require stringent real-time safety cannot always be guaranteed by cellular networks. Therefore, the Heterogeneous Vehicular NETwork (HetVNET), which integrates cellular networks with DSRC, is a potential solution for meeting the communication requirements of the ITS. Although there are a plethora of reported studies on either DSRC or cellular networks, joint research of these two areas is still at its infancy. This paper provides a comprehensive survey on recent wireless networks techniques applied to HetVNETs. Firstly, the requirements and use cases of safety and non-safety services are summarized and compared. Consequently, a HetVNET framework that utilizes a variety of wireless networking techniques is presented, followed by the descriptions of various applications for some typical scenarios. Building such HetVNETs requires a deep understanding of heterogeneity and its associated challenges. Thus, major challenges and solutions that are related to both the Medium Access Control (MAC) and network layers in HetVNETs are studied and discussed in detail. Finally, we outline open issues that help to identify new research directions in HetVNETs.",
"title": ""
},
{
"docid": "29fc090c5d1e325fd28e6bbcb690fb8d",
"text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools",
"title": ""
},
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
},
{
"docid": "8f0d90a605829209c7b6d777c11b299d",
"text": "Researchers and educators have designed curricula and resources for introductory programming environments such as Scratch, App Inventor, and Kodu to foster computational thinking in K-12. This paper is an empirical study of the effectiveness and usefulness of tiles and flashcards developed for Microsoft Kodu Game Lab to support students in learning how to program and develop games. In particular, we investigated the impact of physical manipulatives on 3rd -- 5th grade students' ability to understand, recognize, construct, and use game programming design patterns. We found that the students who used physical manipulatives performed well in rule construction, whereas the students who engaged more with the rule editor of the programming environment had better mental simulation of the rules and understanding of the concepts.",
"title": ""
},
{
"docid": "a0589d0c1df89328685bdabd94a1a8a2",
"text": "We present a translation of §§160–166 of Dedekind’s Supplement XI to Dirichlet’s Vorlesungen über Zahlentheorie, which contain an investigation of the subfields of C. In particular, Dedekind explores the lattice structure of these subfields, by studying isomorphisms between them. He also indicates how his ideas apply to Galois theory. After a brief introduction, we summarize the translated excerpt, emphasizing its Galois-theoretic highlights. We then take issue with Kiernan’s characterization of Dedekind’s work in his extensive survey article on the history of Galois theory; Dedekind has a nearly complete realization of the modern “fundamental theorem of Galois theory” (for subfields of C), in stark contrast to the picture presented by Kiernan at points. We intend a sequel to this article of an historical and philosophical nature. With that in mind, we have sought to make Dedekind’s text accessible to as wide an audience as possible. Thus we include a fair amount of background and exposition.",
"title": ""
},
{
"docid": "8b0a09cbac4b1cbf027579ece3dea9ef",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
}
] |
scidocsrr
|
14d7a9fee13fc480e342a9a54ff08cc0
|
Accurately detecting trolls in Slashdot Zoo via decluttering
|
[
{
"docid": "a178871cd82edaa05a0b0befacb7fc38",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
},
{
"docid": "8a8b33eabebb6d53d74ae97f8081bf7b",
"text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.",
"title": ""
},
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
}
] |
[
{
"docid": "ec5e3b472973e3f77812976b1dd300a5",
"text": "In this thesis we investigate different methods of automating behavioral analysis in animal videos using shapeand motion-based models, with a focus on classifying large datasets of rodent footage. In order to leverage the recent advances in deep learning techniques a massive number of training samples is required, which has lead to the development of a data transfer pipeline to gather footage from multiple video sources and a custom-built web-based video annotation tool to create annotation datasets. Finally we develop and compare new deep convolutional and recurrent-convolutional neural network architectures that outperform existing systems.",
"title": ""
},
{
"docid": "e89a1c0fb1b0736b238373f2fbca91a0",
"text": "In this paper, we provide a comprehensive study of elliptic curve cryptography (ECC) for wireless sensor networks (WSN) security provisioning, mainly for key management and authentication modules. On the other hand, we present and evaluate a side-channel attacks (SCAs) experimental bench solution for energy evaluation, especially simple power analysis (SPA) attacks experimental bench to measure dynamic power consumption of ECC operations. The goal is the best use of the already installed SCAs experimental bench by performing the robustness test of ECC devices against SPA as well as the estimate of its energy and dynamic power consumption. Both operations are tested: point multiplication over Koblitz curves and doubling points over binary curves, with respectively affine and projective coordinates. The experimental results and its comparison with simulation ones are presented. They can lead to accurate power evaluation with the maximum reached error less than 30%.",
"title": ""
},
{
"docid": "cfc0caeb9c00b375d930cde8f5eed66e",
"text": "Usability is an important and determinant factor in human-computer systems acceptance. Usability issues are still identified late in the software development process, during testing and deployment. One of the reasons these issues arise late in the process is that current requirements engineering practice does not incorporate usability perspectives effectively into software requirements specifications. The main strength of usability-focused software requirements is the clear visibility of usability aspects for both developers and testers. The explicit expression of these aspects of human-computer systems can be built for optimal usability and also evaluated effectively to uncover usability issues. This paper presents a design science-oriented research design to test the proposition that incorporating user modelling and usability modelling in software requirements specifications improves design. The proposal and the research design are expected to make a contribution to knowledge by theory testing and to practice with effective techniques to produce usable human computer systems.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
{
"docid": "c63465c12bbf8474293c839f9ad73307",
"text": "Maintaining the balance or stability of legged robots in natural terrains is a challenging problem. Besides the inherent unstable characteristics of legged robots, the sources of instability are the irregularities of the ground surface and also the external pushes. In this paper, a push recovery framework for restoring the robot balance against external unknown disturbances will be demonstrated. It is assumed that the magnitude of exerted pushes is not large enough to use a reactive stepping strategy. In the comparison with previous methods, which a simplified model such as point mass model is used as the model of the robot for studying the push recovery problem, the whole body dynamic model will be utilized in present work. This enhances the capability of the robot to exploit all of the DOFs to recover its balance. To do so, an explicit dynamic model of a quadruped robot will be derived. The balance controller is based on the computation of the appropriate acceleration of the main body. It is calculated to return the robot to its desired position after the perturbation. This acceleration should be chosen under the stability and friction conditions. To calculate main body acceleration, an optimization problem is defined so that the stability, friction condition considered as its constraints. The simulation results show the effectiveness of the proposed algorithm. The robot can restore its balance against the large disturbance solely through the adjustment of the position and orientation of main body.",
"title": ""
},
{
"docid": "dc2d2fe3c6dcbe57b257218029091d8c",
"text": "One motivation in the study of development is the discovery of mechanisms that may guide evolutionary change. Here we report how development governs relative size and number of cheek teeth, or molars, in the mouse. We constructed an inhibitory cascade model by experimentally uncovering the activator–inhibitor logic of sequential tooth development. The inhibitory cascade acts as a ratchet that determines molar size differences along the jaw, one effect being that the second molar always makes up one-third of total molar area. By using a macroevolutionary test, we demonstrate the success of the model in predicting dentition patterns found among murine rodent species with various diets, thereby providing an example of ecologically driven evolution along a developmentally favoured trajectory. In general, our work demonstrates how to construct and test developmental rules with evolutionary predictability in natural systems.",
"title": ""
},
{
"docid": "a83905ec368b96d1845f78f69e09edaa",
"text": "Fermented beverages hold a long tradition and contribution to the nutrition of many societies and cultures worldwide. Traditional fermentation has been empirically developed in ancient times as a process of raw food preservation and at the same time production of new foods with different sensorial characteristics, such as texture, flavour and aroma, as well as nutritional value. Low-alcoholic fermented beverages (LAFB) and non-alcoholic fermented beverages (NAFB) represent a subgroup of fermented beverages that have received rather little attention by consumers and scientists alike, especially with regard to their types and traditional uses in European societies. A literature review was undertaken and research articles, review papers and textbooks were searched in order to retrieve data regarding the dietary role, nutrient composition, health benefits and other relevant aspects of diverse ethnic LAFB and NAFB consumed by European populations. A variety of traditional LAFB and NAFB consumed in European regions, such as kefir, kvass, kombucha and hardaliye, are presented. Milk-based LAFB and NAFB are also available on the market, often characterised as 'functional' foods on the basis of their probiotic culture content. Future research should focus on elucidating the dietary role and nutritional value of traditional and 'functional' LAFB and NAFB, their potential health benefits and consumption trends in European countries. Such data will allow for LAFB and NAFB to be included in national food composition tables.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
{
"docid": "3bc7adca896ab0c18fd8ec9b8c5b3911",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3Dbased Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSREmail addresses: [email protected] (Zhi Liu), [email protected] (Chenyang Zhang), [email protected] (Yingli Tian) Preprint submitted to Image and Vision Computing April 11, 2016 Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "6696d9092ff2fd93619d7eee6487f867",
"text": "We propose an accelerated stochastic block coordinate descent algorithm for nonconvex optimization under sparsity constraint in the high dimensional regime. The core of our algorithm is leveraging both stochastic partial gradient and full partial gradient restricted to each coordinate block to accelerate the convergence. We prove that the algorithm converges to the unknown true parameter at a linear rate, up to the statistical error of the underlying model. Experiments on both synthetic and real datasets backup our theory.",
"title": ""
},
{
"docid": "355591ece281540fb696c1eff3df5698",
"text": "Online health communities are a valuable source of information for patients and physicians. However, such user-generated resources are often plagued by inaccuracies and misinformation. In this work we propose a method for automatically establishing the credibility of user-generated medical statements and the trustworthiness of their authors by exploiting linguistic cues and distant supervision from expert sources. To this end we introduce a probabilistic graphical model that jointly learns user trustworthiness, statement credibility, and language objectivity.\n We apply this methodology to the task of extracting rare or unknown side-effects of medical drugs --- this being one of the problems where large scale non-expert data has the potential to complement expert medical knowledge. We show that our method can reliably extract side-effects and filter out false statements, while identifying trustworthy users that are likely to contribute valuable medical information.",
"title": ""
},
{
"docid": "55eec4fc4a211cee6b735d1884310cc0",
"text": "Understanding driving behaviors is essential for improving safety and mobility of our transportation systems. Data is usually collected via simulator-based studies or naturalistic driving studies. Those techniques allow for understanding relations between demographics, road conditions and safety. On the other hand, they are very costly and time consuming. Thanks to the ubiquity of smartphones, we have an opportunity to substantially complement more traditional data collection techniques with data extracted from phone sensors, such as GPS, accelerometer gyroscope and camera. We developed statistical models that provided insight into driver behavior in the San Francisco metro area based on tens of thousands of driver logs. We used novel data sources to support our work. We used cell phone sensor data drawn from five hundred drivers in San Francisco to understand the speed of traffic across the city as well as the maneuvers of drivers in different areas. Specifically, we clustered drivers based on their driving behavior. We looked at driver norms by street and flagged driving behaviors that deviated from the norm.",
"title": ""
},
{
"docid": "bb19e6b00fca27c455316f09a626407c",
"text": "On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers isa helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic Statistical Manual of Mental Disorders, Fourth Edition, changes to a 2-symptom category from a 3-symptom category in the Diagnostic Statistical Manual of Mental Disorders, Fifth Edition(DSM-5): deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)",
"title": ""
},
{
"docid": "1f5557e647613f9b04a8fa3bdeb989df",
"text": "This research examined how individuals’ gendered avatar might alter their use of gender-based language (i.e., references to emotion, apologies, and tentative language) in text-based computer-mediated communication. Specifically, the experiment tested if men and women would linguistically assimilate a virtual gender identity intimated by randomly assigned gendered avatars (either matched or mismatched to their true gender). Results supported the notion that gender-matched avatars increase the likelihood of gender-typical language use, whereas gender-mismatched avatars promoted countertypical language, especially among women. The gender of a partner’s avatar, however, did not influence participants’ language. Results generally comport with self-categorization theory’s gender salience explanation of gender-based language use.",
"title": ""
},
{
"docid": "2f7862142f2c948db2be11bdaf8abc0b",
"text": "Interoperability is the capability of multiple parties and systems to collaborate and exchange information and matter to obtain their objectives. Interoperability challenges call for a model-based systems engineering approach. This paper describes a conceptual modeling framework for model-based interoperability engineering (MoBIE) for systems of systems, which integrates multilayered interoperability specification, modeling, architecting, design, and testing. Treating interoperability infrastructure as a system in its own right, MoBIE facilitates interoperability among agents, processes, systems, services, and interfaces. MoBIE is founded on ISO 19450 standard—object-process methodology, a holistic paradigm for modeling and architecting complex, dynamic, and multidisciplinary systems—and allows for synergistic integration of the interoperability model with system-centric models. We also discuss the implementation of MoBIE with the unified modeling language. We discuss the importance of interoperability in the civil aviation domain, and apply MoBIE to analyze the passenger departure process in an airport terminal as a case-in-point. The resulting model enables architectural and operational decision making and analysis at the system-of-systems level and adds significant value at the interoperability engineering program level.",
"title": ""
},
{
"docid": "7995a7f1e2b2182e6a092a095443e825",
"text": "Model-free reinforcement learning (RL) requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to improve the sample efficiency when we have access to demonstrations. Our approach, Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment’s fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency. This includes reward shaping, behavioral cloning, and reverse curriculum generation.",
"title": ""
},
{
"docid": "348008a31aed772af9be03884fe6dbdc",
"text": "Human-Computer Speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey on the techniques used to design Chatbots and a comparison is made between different design techniques from nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences in the techniques and examines in particular the Loebner prizewinning Chatbots. Keywords—AIML; Chatbot; Loebner Prize; NLP; NLTK; SQL; Turing Test",
"title": ""
},
{
"docid": "50875a63d0f3e1796148d809b5673081",
"text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
}
] |
scidocsrr
|
b4edd2559182bec8f3343903e5fd9a2b
|
Output Reachable Set Estimation and Verification for Multilayer Neural Networks
|
[
{
"docid": "4849bd4a5466b6a5837faabf5e16ab5e",
"text": "In this paper we evaluate state-of-the-art SMT solvers on encodings of verification problems involving Multi-Layer Perceptrons (MLPs), a widely used type of neural network. Verification is a key technology to foster adoption of MLPs in safety-related applications, where stringent requirements about performance and robustness must be ensured and demonstrated. In previous contributions, we have shown that safety problems for MLPs can be attacked by solving Boolean combinations of linear arithmetic constraints. However, the generated encodings are hard for current state-of-the-art SMT solvers, limiting our ability to verify MLPs in practice. The experimental results herewith presented are meant to provide the community with a precise picture of current achievements and standing open challenges in this intriguing application domain.",
"title": ""
}
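The positive passage above describes encoding MLP safety questions as Boolean combinations of linear arithmetic constraints for an SMT solver. The sketch below is a toy version of that idea using the z3-solver Python bindings: it encodes a hand-picked 2-2-1 ReLU network and asks whether its output can exceed a bound on a box of inputs. The weights, input box, and bound are invented for illustration, not taken from the paper.

```python
# Toy SMT encoding of a ReLU MLP output-bound query with Z3.
# Weights, input box and output bound are illustrative assumptions.
from z3 import Real, If, Solver, And, sat

x1, x2 = Real("x1"), Real("x2")

def relu(e):
    return If(e >= 0, e, 0)

# Hidden layer: 2 neurons with hand-picked weights.
h1 = relu(1.0 * x1 - 2.0 * x2 + 0.5)
h2 = relu(0.5 * x1 + 1.0 * x2 - 1.0)
# Output neuron.
y = 1.5 * h1 - 1.0 * h2 + 0.2

s = Solver()
s.add(And(x1 >= -1, x1 <= 1, x2 >= -1, x2 <= 1))  # input box
s.add(y > 3.0)                                     # negation of the safety property y <= 3

if s.check() == sat:
    print("Bound can be violated; counterexample:", s.model())
else:
    print("Output stays <= 3.0 on the whole input box (property verified).")
```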
] |
[
{
"docid": "cabdfcf94607adef9b07799aab463d64",
"text": "Monitoring the health of the elderly living independently in their own homes is a key issue in building sustainable healthcare models which support a country's ageing population. Existing approaches have typically proposed remotely monitoring the behaviour of a household's occupants through the use of additional sensors. However the costs and privacy concerns of such sensors have significantly limited their potential for widespread adoption. In contrast, in this paper we propose an approach which detects Activities of Daily Living, which we use as a proxy for the health of the household residents. Our approach detects appliance usage from existing smart meter data, from which the unique daily routines of the household occupants are learned automatically via a log Gaussian Cox process. We evaluate our approach using two real-world data sets, and show it is able to detect over 80% of kettle uses while generating less than 10% false positives. Furthermore, our approach allows earlier interventions in households with a consistent routine and fewer false alarms in the remaining households, relative to a fixed-time intervention benchmark.",
"title": ""
},
{
"docid": "a3774a953758e650077ac2a33613ff58",
"text": "We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes multiple initial alpha mattes of the previous methods and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs and reconstructed alpha mattes. Among the various existing methods, we focus on using two simple methods as initial alpha mattes: the closed-form matting and KNN matting. They are complementary to each other in terms of local and nonlocal principles. A major benefit of our method is that it can “recognize” different local image structures and then combine the results of local (closed-form matting) and nonlocal (KNN matting) mattings effectively to achieve higher quality alpha mattes than both of the inputs. Furthermore, we verify extendability of the proposed network to different combinations of initial alpha mattes from more advanced techniques such as KL divergence matting and information-flow matting. On the top of deep CNN matting, we build an RGB guided JPEG artifacts removal network to handle JPEG block artifacts in alpha matting. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. We perform deeper experiments including studies to evaluate the importance of balancing training data and to measure the effects of initial alpha mattes and also consider results from variant versions of the proposed network to analyze our proposed DCNN matting. In addition, our method achieved high ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors. Also, our RGB guided JPEG artifacts removal network restores the damaged alpha mattes from compressed images in JPEG format.",
"title": ""
},
{
"docid": "670b58d379b7df273309e55cf8e25db4",
"text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.",
"title": ""
},
{
"docid": "46e7713a462f3d9bc896447c47cc4e5f",
"text": "The balanced scorecard (BSC) has developed as a very useful managerial tool from the mid1990s, and has met with general (and often enthusiastic) acceptance in both business and academic circles. In the knowledge-networked innovation economy of the early 21 century, which is increasingly characterized by globally integrated supply and demand chains, outsourcing of traditional business competencies (even innovation itself), and an emphasis on intellectual capital in contrast to physical capital, the BSC is now showing serious deficiencies. The tyranny of the BSC as a measurement ‘straightjacket’ is beginning to jeopardize the survival of firms, and hinders much-needed business ecosystem innovation, thereby negatively affecting customer value rejuvenation, shareholders’ benefits, and other stakeholders’ as well as societal benefits in general. This article traces the rationale, features, development and application of the BSC in the past ten years, and then provides a critical review of its key problematic effects on firms and their stakeholders in today’s changing business environment. Five major problem areas are identified and discussed, with selected business examples. An alternative to the BSC is proposed and motivated, involving drastic change in both the underlying assumptions of the BSC and moving from a systematic, single enterprise focus to a systemic, dynamic framework – a systemic management system, including a systemic scorecard.",
"title": ""
},
{
"docid": "8933d7d0f57a532ef27b9dbbb3727a88",
"text": "All people can not do as they plan, it happens because of their habits. Therefore, habits and moods may affect their productivity. Hence, the habits and moods are the important parts of person's life. Such habits may be analyzed with various machine learning techniques as available nowadays. Now the question of analyzing the Habits and moods of a person with a goal of increasing one's productivity comes to mind. This paper discusses one such technique called HDML (Habit Detection with Machine Learning). HDML model analyses the mood which helps us to deal with a bad mood or a state of unproductivity, through suggestions about such activities that alleviate our mood. The overall accuracy of the model is about 87.5 %.",
"title": ""
},
{
"docid": "b230400ee47b40751623561e11b1944c",
"text": "Many mHealth apps have been developed to assist people in self-care management. Most of them aim to engage users and provide motivation to increase adherence. Gamification has been introduced to identify the left and right brain drives in order to engage users and motivate them. We are using Octalysis framework to map how top rated stress management apps address the right brain drives. 12 stress management mHealth are classified based on this framework. In this paper, we explore how Gamification has been used in mHealth apps, the intrinsic motivation using self-determination theory, methodology, and findings. In the discussion, we identify design principles that will better suited to enhance intrinsic motivation for people who seek self-stress management.",
"title": ""
},
{
"docid": "430a4f02f2236dcf947a58c8bc70af99",
"text": "A system for online recognition of handwritten Tamil characters is presented. A handwritten character is constructed by executing a sequence of strokes. A structure- or shape-based representation of a stroke is used in which a stroke is represented as a string of shape features. Using this string representation, an unknown stroke is identified by comparing it with a database of strokes using a flexible string matching procedure. A full character is recognized by identifying all the component strokes. Character termination, is determined using a finite state automaton. Development of similar systems for other Indian scripts is outlined.",
"title": ""
},
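The passage above identifies an unknown stroke by flexible string matching of shape-feature strings against a database. A minimal sketch of that idea, using edit distance as the flexible matcher; the shape alphabet and the database entries are invented for illustration and are not the authors' feature set.

```python
# Nearest-stroke lookup by edit distance over shape-feature strings.
# Alphabet (e.g. 'L' line, 'C' curve, 'O' loop) and database are toy examples.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                         # deletion
                        dp[j - 1] + 1,                     # insertion
                        prev + (a[i - 1] != b[j - 1]))     # substitution
            prev = cur
    return dp[-1]

stroke_db = {"LCCL": "stroke_12", "CCO": "stroke_7", "LLC": "stroke_3"}

def identify(unknown: str) -> str:
    best = min(stroke_db, key=lambda s: edit_distance(unknown, s))
    return stroke_db[best]

print(identify("LCCO"))  # returns the label of the closest stored shape string
```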
{
"docid": "8cd8577a70729d03c1561df6a1fcbdbb",
"text": "Quantum computing is a new computational paradigm created by reformulating information and computation in a quantum mechanical framework [30, 27]. Since the laws of physics appear to be quantum mechanical, this is the most relevant framework to consider when considering the fundamental limitations of information processing. Furthermore, in recent decades we have seen a major shift from just observing quantum phenomena to actually controlling quantum mechanical systems. We have seen the communication of quantum information over long distances, the “teleportation” of quantum information, and the encoding and manipulation of quantum information in many different physical media. We still appear to be a long way from the implementation of a large-scale quantum computer, however it is a serious goal of many of the world’s leading physicists, and progress continues at a fast pace. In parallel with the broad and aggressive program to control quantum mechanical systems with increased precision, and to control and interact a larger number of subsystems, researchers have also been aggressively pushing the boundaries of what useful tasks one could perform with quantum mechanical devices. These in-",
"title": ""
},
{
"docid": "e3a571f98248af33fc700c6eefaf9641",
"text": "Two studies examined associations between social networking and depressive symptoms among youth. In Study 1, 384 participants (68% female; mean age = 20.22 years, SD = 2.90) were surveyed. In Study 2, 334 participants (62% female; M age = 19.44 years, SD = 2.05) were surveyed initially and 3 weeks later. Results indicated that depressive symptoms were associated with quality of social networking interactions, not quantity. There was some evidence that depressive rumination moderated associations, and both depressive rumination and corumination were associated with aspects of social networking usage and quality. Implications for understanding circumstances that increase social networking, as well as resulting negative interactions and negative affect are discussed.",
"title": ""
},
{
"docid": "3c29a0579a2f7d4f010b9b2f2df16e2c",
"text": "In recent years research on human activity recognition using wearable sensors has enabled to achieve impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning. In this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar and sometimes even better performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.",
"title": ""
},
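The passage above reduces labeling effort with semi-supervised learning via self-training. A generic sketch of the self-training loop, using a synthetic dataset and scikit-learn rather than the authors' sensor data or classifier; the confidence threshold is an assumption.

```python
# Generic self-training loop: train on labeled data, pseudo-label the most
# confident unlabeled samples, retrain. Dataset and threshold are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:100] = True                      # pretend only 100 samples are labeled

X_lab, y_lab = X[labeled], y[labeled]
X_unlab = X[~labeled]

clf = RandomForestClassifier(random_state=0)
for _ in range(5):                        # a few self-training rounds
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    pick = conf > 0.9                     # confidence threshold (assumption)
    if not pick.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[pick]])
    y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])
    X_unlab = X_unlab[~pick]

print("final training-set size:", len(y_lab))
```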
{
"docid": "3489e9d639223116cb4681959928a198",
"text": "The prevailing concept in modern cognitive neuroscience is that cognitive functions are performed predominantly at the network level, whereas the role of individual neurons is unlikely to extend beyond forming the simple basic elements of these networks. Within this conceptual framework, individuals of outstanding cognitive abilities appear as a result of a favorable configuration of the microarchitecture of the cognitive-implicated networks, whose final formation in ontogenesis may occur in a relatively random way. Here I suggest an alternative concept, which is based on neurological data and on data from human behavioral genetics. I hypothesize that cognitive functions are performed mainly at the intracellular, probably at the molecular level. Central to this hypothesis is the idea that the neurons forming the networks involved in cognitive processes are complex elements whose functions are not limited to generating electrical potentials and releasing neurotransmitters. According to this hypothesis, individuals of outstanding abilities are so due to a ‘lucky’ combination of specific genes that determine the intrinsic properties of neurons involved in cognitive functions of the brain.",
"title": ""
},
{
"docid": "581df8e68fdd475d1f0fab64335aa412",
"text": "In this paper, a method for Li-ion battery state of charge (SOC) estimation using particle filter (PF) is proposed. The equivalent circuit model for Li-ion battery is established based on the available battery block in MATLAB/Simulink. To improve the model's accuracy, the circuit parameters are represented by functions of SOC. Then, the PF algorithm is utilized to do SOC estimation for the battery model. From simulation it reveals that PF provides accurate SOC estimation. It is demonstrated that the proposed method is effective on Li-ion battery SOC estimation.",
"title": ""
},
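The passage above estimates SOC with a particle filter over an equivalent-circuit model. The sketch below shows the generic bootstrap PF loop (predict, weight by the measurement likelihood, resample) on a deliberately simplified scalar SOC model; the open-circuit-voltage map, noise levels and current profile are invented placeholders, not the paper's model.

```python
# Bare-bones bootstrap particle filter for a scalar SOC state.
# OCV map, noise levels and current profile are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 500                                   # number of particles

def ocv(soc):
    return 3.0 + 1.2 * soc                # toy open-circuit-voltage curve

true_soc, capacity, dt, current = 0.9, 3600.0, 1.0, 1.5  # capacity in A*s, 1.5 A draw
particles = rng.uniform(0.0, 1.0, N)      # initial SOC is unknown

for step in range(200):
    true_soc -= current * dt / capacity                       # ground truth
    z = ocv(true_soc) + rng.normal(scale=0.01)                # noisy voltage reading
    # Predict: propagate each particle through the SOC dynamics plus process noise.
    particles -= current * dt / capacity + rng.normal(scale=1e-3, size=N)
    # Update: weight particles by how well they explain the measurement.
    weights = np.exp(-0.5 * ((z - ocv(particles)) / 0.01) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=weights)]

print(f"estimated SOC: {particles.mean():.3f}, true SOC: {true_soc:.3f}")
```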
{
"docid": "8027856b5e9fd0112a6b9950b2901ba5",
"text": "In order to make the Web services, Web applications in Java more powerful, flexible and user friendly, building unified Web applications is very significant. By introducing a new style-Representational State Transfer, this paper studied the goals and design principles of REST, the idea of REST and RESTful Web service design principles, RESTful style Web service, RESTful Web service frameworks in Java and the ways to develop RESTful Web service. The RESTful Web Service frameworks in Java can effectively simplify the Web development in many aspects.",
"title": ""
},
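The passage above surveys RESTful Web service design in Java. As a language-neutral illustration of the same resource-oriented style (a Python/Flask stand-in, not one of the Java frameworks the paper discusses), a minimal sketch with a hypothetical /books resource:

```python
# Minimal RESTful resource in Flask (a Python stand-in for the Java frameworks
# discussed above). The /books resource and its fields are hypothetical.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
books = {1: {"id": 1, "title": "RESTful Web Services"}}

@app.route("/books", methods=["GET"])
def list_books():
    return jsonify(list(books.values()))

@app.route("/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    if book_id not in books:
        abort(404)
    return jsonify(books[book_id])

@app.route("/books", methods=["POST"])
def create_book():
    new_id = max(books) + 1
    books[new_id] = {"id": new_id, "title": request.json["title"]}
    return jsonify(books[new_id]), 201   # 201 Created, following REST conventions

if __name__ == "__main__":
    app.run(debug=True)
```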
{
"docid": "470d2c319aaff0e9afcbd6deab56dca8",
"text": "BACKGROUND\nMotivation and job satisfaction have been identified as key factors for health worker retention and turnover in low- and middle-income countries. District health managers in decentralized health systems usually have a broadened 'decision space' that enables them to positively influence health worker motivation and job satisfaction, which in turn impacts on retention and performance at district-level. The study explored the effects of motivation and job satisfaction on turnover intention and how motivation and satisfaction can be improved by district health managers in order to increase retention of health workers.\n\n\nMETHODS\nWe conducted a cross-sectional survey in three districts of the Eastern Region in Ghana and interviewed 256 health workers from several staff categories (doctors, nursing professionals, allied health workers and pharmacists) on their intentions to leave their current health facilities as well as their perceptions on various aspects of motivation and job satisfaction. The effects of motivation and job satisfaction on turnover intention were explored through logistic regression analysis.\n\n\nRESULTS\nOverall, 69% of the respondents reported to have turnover intentions. Motivation (OR = 0.74, 95% CI: 0.60 to 0.92) and job satisfaction (OR = 0.74, 95% CI: 0.57 to 0.96) were significantly associated with turnover intention and higher levels of both reduced the risk of health workers having this intention. The dimensions of motivation and job satisfaction significantly associated with turnover intention included career development (OR = 0.56, 95% CI: 0.36 to 0.86), workload (OR = 0.58, 95% CI: 0.34 to 0.99), management (OR = 0.51. 95% CI: 0.30 to 0.84), organizational commitment (OR = 0.36, 95% CI: 0.19 to 0.66), and burnout (OR = 0.59, 95% CI: 0.39 to 0.91).\n\n\nCONCLUSIONS\nOur findings indicate that effective human resource management practices at district level influence health worker motivation and job satisfaction, thereby reducing the likelihood for turnover. Therefore, it is worth strengthening human resource management skills at district level and supporting district health managers to implement retention strategies.",
"title": ""
},
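The passage above reports odds ratios with 95% confidence intervals from a logistic regression of turnover intention on motivation and job satisfaction. A generic sketch of how such odds ratios are obtained, using synthetic survey-like data rather than the Ghana survey:

```python
# Logistic regression odds ratios with 95% CIs, on synthetic survey-like data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 256
motivation = rng.normal(3.5, 0.8, n)          # Likert-style scores (synthetic)
satisfaction = rng.normal(3.2, 0.9, n)
# Toy model: higher motivation/satisfaction -> lower turnover intention.
logit = 2.0 - 0.3 * motivation - 0.3 * satisfaction + rng.normal(0, 0.5, n)
turnover = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([motivation, satisfaction]))
result = sm.Logit(turnover, X).fit(disp=0)

odds_ratios = np.exp(result.params)            # exponentiated coefficients = ORs
ci = np.exp(result.conf_int())                 # 95% CI on the OR scale
for name, o, (lo, hi) in zip(["const", "motivation", "satisfaction"], odds_ratios, ci):
    print(f"{name}: OR = {o:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```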
{
"docid": "e03d8f990cfcb07d8088681c3811b542",
"text": "The environments in which we live and the tasks we must perform to survive and reproduce have shaped the design of our perceptual systems through evolution and experience. Therefore, direct measurement of the statistical regularities in natural environments (scenes) has great potential value for advancing our understanding of visual perception. This review begins with a general discussion of the natural scene statistics approach, of the different kinds of statistics that can be measured, and of some existing measurement techniques. This is followed by a summary of the natural scene statistics measured over the past 20 years. Finally, there is a summary of the hypotheses, models, and experiments that have emerged from the analysis of natural scene statistics.",
"title": ""
},
{
"docid": "8ea44a793f57f036db0142cf51b12928",
"text": "This paper presents a comparative study of various classification methods in the application of automatic brain tumor segmentation. The data used in the study are 3D MRI volumes from MICCAI2016 brain tumor segmentation (BRATS) benchmark. 30 volumes are chosen randomly as a training set and 57 volumes are randomly chosen as a test set. The volumes are preprocessed and a feature vector is retrieved from each volume's four modalities (T1, T1 contrast-enhanced, T2 and Fluid-attenuated inversion recovery). The popular Dice score is used as an accuracy measure to record each classifier recognition results. All classifiers are implemented in the popular machine learning suit of algorithms, WEKA.",
"title": ""
},
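The passage above uses the Dice score to measure segmentation accuracy. For reference, a small sketch of how Dice is computed for binary masks (synthetic toy arrays, not BRATS data):

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2D "tumor" masks standing in for 3D MRI label volumes.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred  = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1
print(f"Dice = {dice_score(pred, truth):.3f}")   # 2*9 / (16+16) = 0.5625
```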
{
"docid": "fc2046c92508cb0d6fe2b60c0eb8d2be",
"text": "Voting is an inherent process in a democratic society. Other methods for expressing the society participants’ will for example caucuses in US party elections or Landsgemeine in Switzerland can be inconvenient for the citizens and logistically difficult to organize. Furthermore, beyond inconvenience, there may be legitimate reasons for not being able to take part in the voting process, e.g. being deployed overseas in military or being on some other official assignment. Even more, filling in paper ballots and counting them is error-prone and time-consuming process. A well-known controversy took place during US presidental election in 2000 [Florida recount 2000], when a partial recount of the votes could have changed the outcome of the elections. As the recount was cancelled by the court, the actual result was not never known. Decline in elections’ participation rate has been observed in many old democracies [Summers 2016] and it should be the decision-makers goal to bring the electorate back to the polling booths. One way to do that would be to use internet voting. In this method, the ballots are cast using a personal computer or a smart phone and it sent over the internet to the election committee. However, there have been several critics against the internet voting methods [Springall et al. 2014]. In this report we consider, how to make internet voting protocols more secure by using blockchain.",
"title": ""
},
{
"docid": "24bb26da0ce658ff075fc89b73cad5af",
"text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.",
"title": ""
},
{
"docid": "0b777fa9b40050559826ec01285ea2ec",
"text": "Honeyd (N. Provos, 2004) is a popular tool developed by Niels Provos that offers a simple way to emulate services offered by several machines on a single PC. It is a so called low interaction honeypot. Responses to incoming requests are generated thanks to ad hoc scripts that need to be written by hand. As a result, few scripts exist, especially for services handling proprietary protocols. In this paper, we propose a method to alleviate these problems by automatically generating new scripts. We explain the method and describe its limitations. We analyze the quality of the generated scripts thanks to two different methods. On the one hand, we have launched known attacks against a machine running our scripts; on the other hand, we have deployed that machine on the Internet, next to a high interaction honeypot during two months. For those attackers that have targeted both machines, we can verify if our scripts have, or not, been able to fool them. We also discuss the various tuning parameters of the algorithm that can be set to either increase the quality of the script or, at the contrary, to reduce its complexity",
"title": ""
},
{
"docid": "30155768fd0b1b0950510487840defba",
"text": "Most cloud services are built with multi-tenancy which enables data and configuration segregation upon shared infrastructures. In this setting, a tenant temporarily uses a piece of virtually dedicated software, platform, or infrastructure. To fully benefit from the cloud, tenants are seeking to build controlled and secure collaboration with each other. In this paper, we propose a Multi-Tenant Role-Based Access Control (MT-RBAC) model family which aims to provide fine-grained authorization in collaborative cloud environments by building trust relations among tenants. With an established trust relation in MT-RBAC, the trustee can precisely authorize cross-tenant accesses to the truster's resources consistent with constraints over the trust relation and other components designated by the truster. The users in the trustee may restrictively inherit permissions from the truster so that multi-tenant collaboration is securely enabled. Using SUN's XACML library, we prototype MT-RBAC models on a novel Authorization as a Service (AaaS) platform with the Joyent commercial cloud system. The performance and scalability metrics are evaluated with respect to an open source cloud storage system. The results show that our prototype incurs only 0.016 second authorization delay for end users on average and is scalable in cloud environments.",
"title": ""
}
] |
scidocsrr
|
0c9fef946ee7f60dcb132b0b5be3b15a
|
On How to Design Dataflow FPGA-Based Accelerators for Convolutional Neural Networks
|
[
{
"docid": "3655319a1d2ff7f4bc43235ba02566bd",
"text": "In high-performance systems, stencil computations play a crucial role as they appear in a variety of different fields of application, ranging from partial differential equation solving, to computer simulation of particles’ interaction, to image processing and computer vision. The computationally intensive nature of those algorithms created the need for solutions to efficiently implement them in order to save both execution time and energy. This, in combination with their regular structure, has justified their widespread study and the proposal of largely different approaches to their optimization.\n However, most of these works are focused on aggressive compile time optimization, cache locality optimization, and parallelism extraction for the multicore/multiprocessor domain, while fewer works are focused on the exploitation of custom architectures to further exploit the regular structure of Iterative Stencil Loops (ISLs), specifically with the goal of improving power efficiency.\n This work introduces a methodology to systematically design power-efficient hardware accelerators for the optimal execution of ISL algorithms on Field-programmable Gate Arrays (FPGAs). As part of the methodology, we introduce the notion of Streaming Stencil Time-step (SST), a streaming-based architecture capable of achieving both low resource usage and efficient data reuse thanks to an optimal data buffering strategy, and we introduce a technique called SSTs queuing that is capable of delivering a pseudolinear execution time speedup with constant bandwidth.\n The methodology has been validated on significant benchmarks on a Virtex-7 FPGA using the Xilinx Vivado suite. Results demonstrate how the efficient usage of the on-chip memory resources realized by an SST allows one to treat problem sizes whose implementation would otherwise not be possible via direct synthesis of the original, unmanipulated code via High-Level Synthesis (HLS). We also show how the SSTs queuing effectively ensures a pseudolinear throughput speedup while consuming constant off-chip bandwidth.",
"title": ""
},
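The positive passage above concerns streaming architectures for Iterative Stencil Loops. For readers unfamiliar with the computation being accelerated, a tiny software reference sketch of a 2D 5-point Jacobi stencil in NumPy; the array size, boundary condition and iteration count are arbitrary, and the FPGA-specific line buffering is not modeled here.

```python
# Reference (software) version of a 5-point Jacobi stencil, the kind of
# iterative stencil loop (ISL) the hardware accelerators above target.
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    new = grid.copy()
    # Average of the four neighbours for every interior point.
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

grid = np.zeros((64, 64))
grid[0, :] = 100.0                 # hot boundary condition (illustrative)
for _ in range(200):               # time-steps; queued SSTs would pipeline these
    grid = jacobi_step(grid)
print(f"centre value after 200 steps: {grid[32, 32]:.3f}")
```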
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
}
] |
[
{
"docid": "2578d87c9d30187566a46586acaa6d09",
"text": "Abstract Virtual reality (VR)-based therapy has emerged as a potentially useful means to treat post-traumatic stress disorder (PTSD), but randomized studies have been lacking for Service Members from Iraq or Afghanistan. This study documents a small, randomized, controlled trial of VR-graded exposure therapy (VR-GET) versus treatment as usual (TAU) for PTSD in Active Duty military personnel with combat-related PTSD. Success was gauged according to whether treatment resulted in a 30 percent or greater improvement in the PTSD symptom severity as assessed by the Clinician Administered PTSD Scale (CAPS) after 10 weeks of treatment. Seven of 10 participants improved by 30 percent or greater while in VR-GET, whereas only 1 of the 9 returning participants in TAU showed similar improvement. This is a clinically and statistically significant result (χ(2) = 6.74, p < 0.01, relative risk 3.2). Participants in VR-GET improved an average of 35 points on the CAPS, whereas those in TAU averaged a 9-point improvement (p < 0.05). The results are limited by small size, lack of blinding, a single therapist, and comparison to a relatively uncontrolled usual care condition, but did show VR-GET to be a safe and effective treatment for combat-related PTSD.",
"title": ""
},
{
"docid": "cf1e0d6a07674aa0b4c078550b252104",
"text": "Industry-practiced agile methods must become an integral part of a software engineering curriculum. It is essential that graduates of such programs seeking careers in industry understand and have positive attitudes toward agile principles. With this knowledge they can participate in agile teams and apply these methods with minimal additional training. However, learning these methods takes experience and practice, both of which are difficult to achieve in a direct manner within the constraints of an academic program. This paper presents a novel, immersive boot camp approach to learning agile software engineering concepts with LEGO® bricks as the medium. Students construct a physical product while inductively learning the basic principles of agile methods. The LEGO®-based approach allows for multiple iterations in an active learning environment. In each iteration, students inductively learn agile concepts through their experiences and mistakes. Subsequent iterations then ground these concepts, visibly leading to an effective process. We assessed this approach using a combination of quantitative and qualitative methods. Our assessment shows that the students demonstrated positive attitudes toward the boot-camp approach compared to lecture-based instruction. However, the agile boot camp did not have an effect on the students' recall on class tests when compared to their recall of concepts taught in lecture-based instruction.",
"title": ""
},
{
"docid": "f6a4150a95acce7ef28ec2739a87c00e",
"text": "There is an exciting natural match between Social Network Analysis (SNA) and the growth of social interaction through digital platforms and technologies, from online communities to corporate information systems. This convergence offers a combination of exciting domains, interesting research questions, appropriate analysis techniques and the availability of copious data. Agarwal et al. (2008) put it thus, \"Most transactions and conversations in these online groups leave a digital trace ... this research data makes visible social processes that are much more difficult to study in conventional organizational settings.\" The availability of such data, together with exciting domains and an appropriate analysis technique, form a golden opportunity for research, perhaps even a “21 Century Science” (Watts, 2007).",
"title": ""
},
{
"docid": "a537edc6579892249d157e2dc2f31077",
"text": "An efficient decoupling feeding network is proposed in this letter. It is composed of two directional couplers and two sections of transmission line for connection use. By connecting the two couplers, an indirect coupling with controlled magnitude and phase is introduced, which can be used to cancel out the direct coupling caused by space waves and surface waves between array elements. To demonstrate the method, a two-element microstrip antenna array with the proposed network has been designed, fabricated and measured. Both simulated and measured results have simultaneously proved that the proposed method presents excellent decoupling performance. The measured mutual coupling can be reduced to below -58 dB at center frequency. Meanwhile it has little influence on return loss and radiation patterns. The decoupling mechanism is simple and straightforward which can be easily applied in phased array antennas and MIMO systems.",
"title": ""
},
{
"docid": "a96d6649a2274a919fbeb5b2221d69c6",
"text": "In this paper, a novel center frequency and bandwidth tunable, cross-coupled waveguide resonator filter is presented. The coupling between adjacent resonators can be adjusted using non-resonating coupling resonators. The negative sign for the cross coupling, which is required to generate transmission zeros, is enforced by choosing an appropriate resonant frequency for the cross-coupling resonator. The coupling iris design itself is identical regardless of the sign of the coupling. The design equations for the novel coupling elements are given in this paper. A four pole filter breadboard with two transmission zeros (elliptic filter function) has been built up and measured at various bandwidth and center frequency settings. It operates at Ka-band frequencies and can be tuned to bandwidths from 36 to 72 MHz in the frequency range 19.7-20.2 GHz.",
"title": ""
},
{
"docid": "235e192cc8d0e7e020d5bde490ead034",
"text": "We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior. We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors.",
"title": ""
},
{
"docid": "021c28607fc49aa9ec258a3cc2b1bf85",
"text": "Lymphatic vasculature is increasingly recognized as an important factor both in the regulation of normal tissue homeostasis and immune response and in many diseases, such as inflammation, cancer, obesity, and hypertension. In the last few years, in addition to the central role of vascular endothelial growth factor (VEGF)-C/VEGF receptor-3 signaling in lymphangiogenesis, significant new insights were obtained about Notch, transforming growth factor β/bone morphogenetic protein, Ras, mitogen-activated protein kinase, phosphatidylinositol 3 kinase, and Ca(2+)/calcineurin signaling pathways in the control of growth and remodeling of lymphatic vessels. An emerging picture of lymphangiogenic signaling is complex and in many ways distinct from the regulation of angiogenesis. This complexity provides new challenges, but also new opportunities for selective therapeutic targeting of lymphatic vasculature.",
"title": ""
},
{
"docid": "6478097f207482543c0db12b518be82b",
"text": "What is a good test case? One that reveals potential defects with good cost-effectiveness. We provide a generic model of faults and failures, formalize it, and present its various methodological usages for test case generation.",
"title": ""
},
{
"docid": "1242b663aa025f7041d4dda527f9de56",
"text": "Automatic forecasting of time series data is a challenging problem in many industries. Current forecast models adopted by businesses do not provide adequate means for including data representing external factors that may have a significant impact on the time series, such as weather, national events, local events, social media trends, promotions, etc. This paper introduces a novel neural network attention mechanism that naturally incorporates data from multiple external sources without the feature engineering needed to get other techniques to work. We demonstrate empirically that the proposed model achieves superior performance for predicting the demand of 20 commodities across 107 stores of one of America’s largest retailers when compared to other baseline models, including neural networks, linear models, certain kernel methods, Bayesian regression, and decision trees. Our method ultimately accounts for a 23.9% relative improvement as a result of the incorporation of external data sources, and provides an unprecedented level of descriptive ability for a neural network forecasting model.",
"title": ""
},
{
"docid": "0b48472cd68c53c48b7c895b0b8fd8af",
"text": "We study the problem of finding sentences that explain the relationship between a named entity and an ad-hoc query, which we refer to as entity support sentences. This is an important sub-problem of entity ranking which, to the best of our knowledge, has not been addressed before. In this paper we give the first formalization of the problem, how it can be evaluated, and present a full evaluation dataset. We propose several methods to rank these sentences, namely retrieval-based, entity-ranking based and position-based. We found that traditional bag-of-words models perform relatively well when there is a match between an entity and a query in a given sentence, but they fail to find a support sentence for a substantial portion of entities. This can be improved by incorporating small windows of context sentences and ranking them appropriately.",
"title": ""
},
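The passage above ranks candidate support sentences, noting that bag-of-words retrieval works when the entity and query terms co-occur in a sentence. A minimal sketch of that retrieval-based baseline with TF-IDF and cosine similarity; the sentences and query are toy examples, not the paper's dataset.

```python
# Retrieval-based baseline: rank sentences for an (entity, query) pair by
# TF-IDF cosine similarity. Sentences and query are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "The Nobel Prize ceremony is held in Stockholm every December.",
    "Curie pioneered research on radioactivity with her husband Pierre.",
]
entity, query = "Marie Curie", "radioactivity research"

vectorizer = TfidfVectorizer()
sent_vecs = vectorizer.fit_transform(sentences)
query_vec = vectorizer.transform([entity + " " + query])  # concatenate entity and query terms

scores = cosine_similarity(query_vec, sent_vecs).ravel()
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")
```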
{
"docid": "6db790d4d765b682fab6270c5930bead",
"text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.",
"title": ""
},
{
"docid": "722e8a04db2e6fa48623a68ccf93d2af",
"text": "This study exhibits the application of the concept of matrices, probability and optimization in making an electronic Tic-Tac-Toe game using logic gates and exhibiting the concept of Boolean algebra. For a finite number of moves in every single game of Tic-Tac-Toe, the moves are recorded in a 3×3 matrix and the subsequent solution, or a winning combination, is presented from the data obtained by playing the electronic game. The solution is also displayed electronically using an LED. The circuit has been designed in a way to apply Boolean logic to analyze player's moves and thus, give a corresponding output from the electronic game and use it in matrices. The electronic Tic-Tac-Toe game is played randomly between 20 pairs of players. The impact of different opening moves is observed. Also, effect of different strategies, aggressive or defensive, on the outcome of the game is explored. The concept of Boolean algebra, logic gates, matrices and probability is applied in this game to make the foundation of the logic for this game. The most productive position for placing an `X' or `O' is found out using probabilities. Also the most effective blocking move is found out through which a player placing `O' can block `X' from winning. The skills help in understanding what strategy can be implemented to be on the winning side. The study is developed with an attempt to realistically model a tic-tac-toe game, and help in reflecting major tendencies. This knowledge helps in understanding what strategy to implement to be on the winning side.",
"title": ""
},
{
"docid": "dee37431ec24aae3fd8c9e43a4f9f93e",
"text": "We present a new feature representation method for scene text recognition problem, particularly focusing on improving scene character recognition. Many existing methods rely on Histogram of Oriented Gradient (HOG) or part-based models, which do not span the feature space well for characters in natural scene images, especially given large variation in fonts with cluttered backgrounds. In this work, we propose a discriminative feature pooling method that automatically learns the most informative sub-regions of each scene character within a multi-class classification framework, whereas each sub-region seamlessly integrates a set of low-level image features through integral images. The proposed feature representation is compact, computationally efficient, and able to effectively model distinctive spatial structures of each individual character class. Extensive experiments conducted on challenging datasets (Chars74K, ICDAR'03, ICDAR'11, SVT) show that our method significantly outperforms existing methods on scene character classification and scene text recognition tasks.",
"title": ""
},
{
"docid": "8321eecac6f8deb25ffd6c1b506c8ee3",
"text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.",
"title": ""
},
{
"docid": "ab989f39a5dd2ba3c98c0ffddd5c85cb",
"text": "This paper proposes a revision of the multichannel concept as it has been applied in previous studies on multichannel commerce. Digitalization and technological innovations have blurred the line between physical and electronic channels. A structured literature review on multichannel consumer and firm behaviour is conducted to reveal the established view on multichannel phenomena. By providing empirical evidence on market offerings and consumer perceptions, we expose a significant mismatch between the dominant conceptualization of multichannel commerce applied in research and today’s market realities. This tension highlights the necessity for a changed view on multichannel commerce to study and understand phenomena in converging sales channels. Therefore, an extended conceptualization of multichannel commerce, named the multichannel continuum, is proposed. This is the first study that considers the broad complexity of integrated multichannel decisions. It aims at contributing to the literature on information systems and channel choice by developing a reference frame for studies on how technological advancements that allow the integration of different channels shape consumer and firm decision making in multichannel commerce. Accordingly, a brief research agenda contrasts established findings with unanswered questions, challenges and opportunities that arise in this more complex multichannel market environment.",
"title": ""
},
{
"docid": "4e8131e177330af2fb8999c799508b58",
"text": "Unmanned aerial vehicles (UAVs) such as multi-copters are expected to be used for inspection of aged infrastructure or for searching damaged buildings in the event of a disaster. However, in a confined space in such environments, UAVs suffer a high risk of falling as a result of contact with an obstacle. To ensure an aerial inspection in the confined space, we have proposed a UAV with a passive rotating spherical shell (PRSS UAV); The UAV and the spherical shell are connected by a 3DOF gimbal mechanism to allow them to rotate in all directions independently, so that the UAV can maintain its flight stability during a collision with an obstacle because only the shell is disturbed and rotated. To apply the PRSS UAV into real-world missions, we have to carefully choose many design parameters such as weight, structure, diameter, strength of the spherical shell, axis configuration of the gimbal, and model of the UAV. In this paper, we propose a design strategy for applying the concept of the PRSS mechanism, focusing on disaster response and infrastructure inspection. We also demonstrate the validity of this approach by the successful result of quantitative experiments and practical field tests.",
"title": ""
},
{
"docid": "8bcc51e311ab55fab6a4f60e6271716b",
"text": "An approach for the semi-automated recovery of traceability links between software documentation and source code is presented. The methodology is based on the application of information retrieval techniques to extract and analyze the semantic information from the source code and associated documentation. A semi-automatic process is defined based on the proposed methodology. The paper advocates the use of latent semantic indexing (LSI) as the supporting information retrieval technique. Two case studies using existing software are presented comparing this approach with others. The case studies show positive results for the proposed approach, especially considering the flexibility of the methods used.",
"title": ""
},
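The passage above recovers traceability links with latent semantic indexing. A compact sketch of the core step, projecting a requirement and candidate code artifacts into an LSI space and ranking links by cosine similarity; the toy corpus is invented, and scikit-learn's TruncatedSVD stands in for the LSI implementation the authors used.

```python
# LSI-style traceability sketch: embed docs and code artifacts with TF-IDF +
# truncated SVD, then rank candidate links by cosine similarity. Toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["The system shall encrypt user passwords before storage."]
code_artifacts = [
    "class PasswordHasher encrypt hash salt store",     # identifiers/comments, flattened
    "class ReportGenerator render pdf export table",
    "def store_user_credentials encrypt password database",
]

corpus = requirements + code_artifacts
tfidf = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

req_vec, code_vecs = lsi[:1], lsi[1:]
scores = cosine_similarity(req_vec, code_vecs).ravel()
for score, artifact in sorted(zip(scores, code_artifacts), reverse=True):
    print(f"{score:.3f}  {artifact}")
```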
{
"docid": "d4f575851c5912cdac01efac514e1d56",
"text": "On line analytical processing (OLAP) is an essential element of decision-support systems. OLAP tools provide insights and understanding needed for improved decision making. However, the answers to OLAP queries can be biased and lead to perplexing and incorrect insights. In this paper, we propose, a system to detect, explain, and to resolve bias in decision-support queries. We give a simple definition of a biased query, which performs a set of independence tests on the data to detect bias. We propose a novel technique that gives explanations for bias, thus assisting an analyst in understanding what goes on. Additionally, we develop an automated method for rewriting a biased query into an unbiased query, which shows what the analyst intended to examine. In a thorough evaluation on several real datasets we show both the quality and the performance of our techniques, including the completely automatic discovery of the revolutionary insights from a famous 1973 discrimination case.",
"title": ""
},
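The passage above detects bias by running independence tests on the data behind a decision-support query. A sketch of the kind of test involved, a chi-squared test of independence on an aggregated 2x2 table; the counts below are made up for illustration and are not the 1973 case data.

```python
# Chi-squared test of independence on an aggregated group x outcome table.
# The counts are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

#                 admitted  rejected
table = np.array([[ 120,      230 ],    # group A
                  [  90,      310 ]])   # group B

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
if p_value < 0.05:
    print("Outcome and group are not independent in the aggregate -> query may be biased.")
else:
    print("No evidence against independence at the 5% level.")
```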
{
"docid": "d8fc5a8bc075343b2e70a9b441ecf6e5",
"text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.",
"title": ""
},
{
"docid": "c35db6f50a6ca89d45172faf0332946a",
"text": "Mobile commerce had been expected to become a major force of e-commerce in the 21st century. However, the rhetoric has far exceeded the reality so far. While academics and practitioners have presented many views about the lack of rapid growth of mobile commerce, we submit that the anticipated mobile commerce take-off hinges on the emergence of a few killer apps. After reviewing the recent history of technologies that have dramatically changed our way of life and work, we propose a set of criteria for identifying and evaluating killer apps. From this vantage point, we argue that mobile payment and banking are the most likely candidates for the killer apps that could bring the expectation of a world of ubiquitous mobile commerce to fruition. Challenges and opportunities associated with this argument are discussed.",
"title": ""
}
] |
scidocsrr
|
a1a6ee5c1d83166619656b9a51c222d2
|
Crowdsourced time-sync video tagging using temporal and personalized topic modeling
|
[
{
"docid": "42b5d245a0f18cbb532e7f2f890a0de4",
"text": "A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and empirical likelihood method. In this paper, we demonstrate experimentally that commonly-used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.",
"title": ""
},
{
"docid": "eae92d06d00d620791e6b247f8e63c36",
"text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.",
"title": ""
},
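The passage above recommends tags by mapping sparsely tagged resources onto LDA topics learned from well-annotated ones. A condensed sketch of that pipeline with gensim; the tag sets below are toy data, whereas the real system is trained on large folksonomy corpora.

```python
# LDA-based tag recommendation sketch with gensim. Tag documents are toy data.
from gensim import corpora, models

# Tag sets of well-annotated resources act as the training "documents".
tagged_resources = [
    ["jazz", "saxophone", "live", "concert"],
    ["jazz", "piano", "album", "blues"],
    ["python", "tutorial", "programming", "code"],
    ["programming", "java", "code", "software"],
]
dictionary = corpora.Dictionary(tagged_resources)
corpus = [dictionary.doc2bow(tags) for tags in tagged_resources]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=20)

# A new resource with only one tag gets mapped to the latent topics...
new_resource = dictionary.doc2bow(["jazz"])
topics = lda.get_document_topics(new_resource)
best_topic = max(topics, key=lambda t: t[1])[0]

# ...and the most probable tags of that topic are recommended.
recommended = [word for word, _ in lda.show_topic(best_topic, topn=5) if word != "jazz"]
print("recommended tags:", recommended)
```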
{
"docid": "209de57ac23ab35fa731b762a10f782a",
"text": "Although fully generative models have been successfully used to model the contents of text documents, they are often awkward to apply to combinations of text data and document metadata. In this paper we propose a Dirichlet-multinomial regression (DMR) topic model that includes a log-linear prior on document-topic distributions that is a function of observed features of the document, such as author, publication venue, references, and dates. We show that by selecting appropriate features, DMR topic models can meet or exceed the performance of several previously published topic models designed for specific data.",
"title": ""
}
] |
[
{
"docid": "cc64adfeed5dcc457e03bd03efcd03ba",
"text": "This work presents methods for path planning and obstacle avoidance for the humanoid robot QRIO, allowing the robot to autonomously walk around in a home environment. For an autonomous robot, obstacle detection and localization as well as representing them in a map are crucial tasks for the success of the robot. Our approach is based on plane extraction from data captured by a stereo-vision system that has been developed specifically for QRIO. We briefly overview the general software architecture composed of perception, short and long term memory, behavior control, and motion control, and emphasize on our methods for obstacle detection by plane extraction, occupancy grid mapping, and path planning. Experimental results complete the description of our system.",
"title": ""
},
{
"docid": "61cbdff852aa544a6f7cc57bc76903ff",
"text": "Modern natural language processing and understanding applications have enjoyed a great boost utilizing neural networks models. However, this is not the case for most languages especially low-resource ones with insufficient annotated training data. Cross-lingual transfer learning methods improve the performance on a lowresource target language by leveraging labeled data from other (source) languages, typically with the help of cross-lingual resources such as parallel corpora. In this work, we propose the first zero-resource multilingual transfer learning model1 that can utilize training data in multiple source languages, while not requiring target language training data nor cross-lingual supervision. Unlike existing methods that only rely on language-invariant features for cross-lingual transfer, our approach utilizes both language-invariant and language-specific features in a coherent way. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the relation between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. It results in significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks including a large-scale real-world industry dataset.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "6e197b28345fad3b76cde4e1cbfa392a",
"text": "This paper presents a novel ultra-low-power dual-phase current-mode relaxation oscillator, which produces a 122 kHz digital clock and has total power consumption of 14.4 nW at 0.6 V. Its frequency dependence is 327 ppm/°C over a temperature range of -20° C to 100° C, and its supply voltage coefficient is ±3.0%/V from 0.6 V to 1.8 V. The proposed oscillator is fabricated in 0.18 μm CMOS technology and occupies 0.03 mm2. At room temperature it achieves a figure of merit of 120 pW/kHz, making it one of the most efficient relaxation oscillators reported to date.",
"title": ""
},
{
"docid": "a9975365f0bad734b77b67f63bdf7356",
"text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.",
"title": ""
},
{
"docid": "3e1690ae4d61d87edb0e4c3ce40f6a88",
"text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.",
"title": ""
},
{
"docid": "77da7651b0e924d363c859d926e8c9da",
"text": "Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons’ schedule and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating ‘task highlights’ which can give surgeons a more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions achieving up to 0.61 average spearman correlation coefficient. Moreover, we provide an analysis on how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.",
"title": ""
},
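To make the idea of holistic features and weighted fusion concrete, here is a small, hypothetical sketch (not the paper's pipeline): low-frequency DFT magnitudes and simple moment statistics are extracted from synthetic kinematic trajectories, each feature type drives a least-squares score predictor, and the predictions are fused with fixed illustrative weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def dft_features(traj, n_coeff=8):
    """Low-frequency DFT magnitudes per kinematic channel, concatenated."""
    # traj: (T, C) time series of robot kinematic variables
    spec = np.abs(np.fft.rfft(traj, axis=0))[:n_coeff]   # (n_coeff, C)
    return spec.ravel()

def moment_features(traj):
    """A second, very simple holistic descriptor: per-channel mean and std."""
    return np.concatenate([traj.mean(axis=0), traj.std(axis=0)])

# Hypothetical dataset: 20 trials, 200 time steps, 6 kinematic channels,
# each trial labeled with a scalar skill score (random stand-ins here).
trials = [rng.normal(size=(200, 6)) for _ in range(20)]
scores = rng.uniform(0, 30, size=20)

def fit_predict(feature_fn, train_idx, test_idx):
    X = np.stack([feature_fn(t) for t in trials])
    X = np.hstack([X, np.ones((len(X), 1))])             # bias term
    w, *_ = np.linalg.lstsq(X[train_idx], scores[train_idx], rcond=None)
    return X[test_idx] @ w

train, test = np.arange(15), np.arange(15, 20)
pred_dft = fit_predict(dft_features, train, test)
pred_mom = fit_predict(moment_features, train, test)

# Weighted feature fusion: fixed illustrative weights (tuned in practice).
w_dft, w_mom = 0.7, 0.3
fused = w_dft * pred_dft + w_mom * pred_mom
print("fused score predictions:", np.round(fused, 2))
```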
{
"docid": "7be6ee5dee7fc6b64da29e0b60814fee",
"text": "J. P. Guilford (1950) asked in his inaugural address to the American Psychological Association why schools were not producing more creative persons. He also asked, “Why is there so little apparent correlation between education and creative productiveness” (p. 444)? This article presents a review of past and current research on the relation of education to creativity in students of preschool age through age 16 in U.S. public schools. Several models of creative thinking are presented (e.g., Guilford, 1985; Renzulli, 1992; Runco & Chand, 1995), as well as techniques for developing creativity (e.g., Davis, 1982; Sternberg & Williams, 1996). Some research presented indicates a relation between creativity and learning (e.g., Karnes et al., 1961; Torrance, 1981). Implications for research and practice",
"title": ""
},
{
"docid": "e2262042d05f56796be0a81373e01b2f",
"text": "Server virtualization is a fundamental technological innovation that is used extensively in IT enterprises. Server virtualization enables creation of multiple virtual machines on single underlying physical machine. It is realized either in form of hypervisors or containers. Hypervisor is an extra layer of abstraction between the hardware and virtual machines that emulates underlying hardware. In contrast, the more recent container-based virtualization technology runs on host kernel without additional layer of abstraction. Thus container technology is expected to provide near native performance compared to hypervisor based technology. We have conducted a series of experiments to measure and compare the performance of workloads over hypervisor based virtual machines, Docker containers and native bare metal machine. We use a standard benchmark workload suite that stresses CPU, memory, disk IO and system. The results obtained show that Docker containers provide better or similar performance compared to traditional hypervisor based virtual machines in almost all the tests. However as expected the native system still provides the best performance as compared to either containers or hypervisors.",
"title": ""
},
{
"docid": "5a9d0e5046129bbdad435980f125db37",
"text": "The impact of channel width scaling on low-frequency noise (LFN) and high-frequency performance in multifinger MOSFETs is reported in this paper. The compressive stress from shallow trench isolation (STI) cannot explain the lower LFN in extremely narrow devices. STI top corner rounding (TCR)-induced Δ<i>W</i> is identified as an important factor that is responsible for the increase in transconductance <i>Gm</i> and the reduction in LFN with width scaling to nanoscale regime. A semi-empirical model was derived to simulate the effective mobility (μ<sub>eff</sub>) degradation from STI stress and the increase in effective width (<i>W</i><sub>eff</sub>) from Δ<i>W</i> due to STI TCR. The proposed model can accurately predict width scaling effect on <i>Gm</i> based on a tradeoff between μ<sub>eff</sub> and <i>W</i><sub>eff</sub>. The enhanced STI stress may lead to an increase in interface traps density (<i>N</i><sub>it</sub>), but the influence is relatively minor and can be compensated by the <i>W</i><sub>eff</sub> effect. Unfortunately, the extremely narrow devices suffer <i>fT</i> degradation due to an increase in <i>C</i><sub>gg</sub>. The investigation of impact from width scaling on μ<sub>eff</sub>, <i>Gm</i>, and LFN, as well as the tradeoff between LFN and high-frequency performance, provides an important layout guideline for analog and RF circuit design.",
"title": ""
},
{
"docid": "75d76315376a1770c4be06d420a0bf96",
"text": "Motor vehicles greatly influence human life but are also a major cause of death and road congestion, which is an obstacle to future economic development. We believe that by learning driving patterns, useful navigation support can be provided for drivers. In this paper, we present a simple and reliable method for the recognition of driving events using hidden Markov models (HMMs), popular stochastic tools for studying time series data. A data acquisition system was used to collect longitudinal and lateral acceleration and speed data from a real vehicle in a normal driving environment. Data were filtered, normalized, segmented, and quantified to obtain the symbolic representation necessary for use with discrete HMMs. Observation sequences for training and evaluation were manually selected and classified as events of a particular type. An appropriate model size was selected, and the model was trained for each type of driving events. Observation sequences from the training set were evaluated by multiple models, and the highest probability decides what kind of driving event this sequence represents. The recognition results showed that HMMs could recognize driving events very accurately and reliably.",
"title": ""
},
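A minimal sketch of the recognition step described above, under the assumption of already-trained discrete HMMs (here filled with random parameters): each candidate event type has its own model, a quantized observation sequence is scored with the forward algorithm, and the highest log-likelihood decides the event label. Symbol counts, state counts and the observation sequence are all hypothetical.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space for stability."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        m = alpha.max()
        # alpha_t(j) = B[j, o] * sum_i alpha_{t-1}(i) * A[i, j]
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_B[:, o]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

rng = np.random.default_rng(2)
n_states, n_symbols = 3, 8   # symbols = quantized accel/speed codewords (hypothetical)

def random_hmm():
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return np.log(pi), np.log(A), np.log(B)

# One (normally trained, here random) model per driving-event type.
models = {"lane_change": random_hmm(), "turn": random_hmm(), "braking": random_hmm()}

# A quantized observation sequence from an unknown event.
obs = rng.integers(0, n_symbols, size=25)

scores = {name: log_forward(obs, *params) for name, params in models.items()}
print("predicted event:", max(scores, key=scores.get))
```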
{
"docid": "58b5bf62497220a3f27dc0b4a89d851e",
"text": "Applications and systems are constantly faced with decisions to make, often using a policy to pick from a set of actions based on some contextual information. We create a service that uses machine learning to accomplish this goal. The service uses exploration, logging, and online learning to create a counterfactually sound system supporting a full data lifecycle. The system is general: it works for any discrete choices, with respect to any reward metric, and can work with many learning algorithms and feature representations. The service has a simple API, and was designed to be modular and reproducible to ease deployment and debugging, respectively. We demonstrate how these properties enable learning systems that are robust and safe. Our evaluation shows that the Decision Service makes decisions in real time and incorporates new data quickly into learned policies. A large-scale deployment for a personalized news website has been handling all traffic since Jan. 2016, resulting in a 25% relative lift in clicks. By making the Decision Service externally available, we hope to make optimal decision making available to all.",
"title": ""
},
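The abstract above hinges on logging exploration probabilities so that new policies can be evaluated counterfactually. The toy sketch below (not the Decision Service API) runs an epsilon-greedy contextual bandit on simulated data, logs the propensity of every chosen action, and then estimates a policy's value offline with inverse propensity scoring; all names, constants and the reward simulator are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_actions, d = 4, 5
true_theta = rng.normal(size=(n_actions, d))   # hidden reward model (simulation only)

weights = np.zeros((n_actions, d))             # per-action linear policy, learned online
eps = 0.1                                      # exploration rate
log = []                                       # (context, action, propensity, reward)

for t in range(2000):
    x = rng.normal(size=d)                     # context
    greedy = int(np.argmax(weights @ x))
    a = int(rng.integers(n_actions)) if rng.random() < eps else greedy
    p = eps / n_actions + (1 - eps) * (a == greedy)          # logging propensity
    r = float(true_theta[a] @ x + 0.1 * rng.normal() > 0)    # simulated click
    log.append((x, a, p, r))
    # Simple online update for the chosen action only.
    weights[a] += 0.05 * (r - weights[a] @ x) * x

# Counterfactual (offline) evaluation of a policy via inverse propensity scoring.
def ips_value(policy, log):
    return np.mean([(policy(x) == a) * r / p for x, a, p, r in log])

final_greedy = lambda x: int(np.argmax(weights @ x))
print("IPS estimate of final greedy policy:", round(ips_value(final_greedy, log), 3))
```

Evaluating the learned policy on its own exploration log is only meant to show the mechanics; a real deployment would evaluate candidate policies on logs collected by a different behavior policy.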
{
"docid": "4ed4bab7f0ef009ed1bb2e803c3c7833",
"text": "Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.",
"title": ""
},
{
"docid": "ba0481ae973970f96f7bf7b1a5461f16",
"text": "WEP is a protocol for securing wireless networks. In the past years, many attacks on WEP have been published, totally breaking WEP’s security. This thesis summarizes all major attacks on WEP. Additionally a new attack, the PTW attack, is introduced, which was partially developed by the author of this document. Some advanced versions of the PTW attack which are more suiteable in certain environments are described as well. Currently, the PTW attack is fastest publicly known key recovery attack against WEP protected networks.",
"title": ""
},
{
"docid": "128ea037369e69aefa90ec37ae1f9625",
"text": "The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method.",
"title": ""
},
{
"docid": "1c61be03a48c6f48f45a4f5823b6eaa4",
"text": "Large classrooms have traditionally provided multiple blackboards on which an entire lecture could be visible. In recent decades, classrooms were augmented with a data projector and screen, allowing computer-generated slides to replace hand-written blackboard presentations and overhead transparencies as the medium of choice. Many lecture halls and conference rooms will soon be equipped with multiple projectors that provide large, high-resolution displays of comparable size to an old fashioned array of blackboards. The predominant presentation software, however, is still designed for a single medium-resolution projector. With the ultimate goal of designing rich presentation tools that take full advantage of increased screen resolution and real estate, we conducted an observational study to examine current practice with both traditional whiteboards and blackboards, and computer-generated slides. We identify several categories of observed usage, and highlight differences between traditional media and computer slides. We then present design guidelines for presentation software that capture the advantages of the old and the new and describe a working prototype based on those guidelines that more fully utilizes the capabilities of multiple displays.",
"title": ""
},
{
"docid": "d38f389809b9ed973e3b92216496909c",
"text": "Bullwhip effect in the supply chain distribution network is a phenomenon that is highly avoided because it can lead to high operational costs. It drew the attention of researchers to examine ways to minimize the bullwhip effect. Bullwhip effect occurs because of incorrect company planning in pursuit of customer demand. Bullwhip effect occurs due to increased amplitude of demand variance towards upper supply chain level. If the product handled is a perishable product it will make the bullwhip effect more sensitive. The purpose of this systematic literature review is to map out some of the variables used in constructing mathematical models to minimize the bullwhip effect on food supply chains that have perishable product characteristics. The result of this systematic literature review is that the authors propose an appropriate optimization model that will be applied in the food supply chain sales on the train in Indonesian railways in the next research.",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "6c08b9488b5f5c7e4b91d2b8941a9ced",
"text": "Modern affiliate marketing networks provide an infrastructure for connecting merchants seeking customers with independent marketers (affiliates) seeking compensation. This approach depends on Web cookies to identify, at checkout time, which affiliate should receive a commission. Thus, scammers ``stuff'' their own cookies into a user's browser to divert this revenue. This paper provides a measurement-based characterization of cookie-stuffing fraud in online affiliate marketing. We use a custom-built Chrome extension, AffTracker, to identify affiliate cookies and use it to gather data from hundreds of thousands of crawled domains which we expect to be targeted by fraudulent affiliates. Overall, despite some notable historical precedents, we found cookie-stuffing fraud to be relatively scarce in our data set. Based on what fraud we detected, though, we identify which categories of merchants are most targeted and which third-party affiliate networks are most implicated in stuffing scams. We find that large affiliate networks are targeted significantly more than merchant-run affiliate programs. However, scammers use a wider range of evasive techniques to target merchant-run affiliate programs to mitigate the risk of detection suggesting that in-house affiliate programs enjoy stricter policing.",
"title": ""
}
] |
scidocsrr
|
e16a5bc0fe6f52e76e6309a17713a024
|
A lattice Boltzmann method for immiscible multiphase flow simulations using the level set method
|
[
{
"docid": "66532253c6a60d6c406717964c308879",
"text": "We present an overview of the lattice Boltzmann method (LBM), a parallel and efficient algorithm for simulating single-phase and multiphase fluid flows and for incorporating additional physical complexities. The LBM is especially useful for modeling complicated boundary conditions and multiphase interfaces. Recent extensions of this method are described, including simulations of fluid turbulence, suspension flows, and reaction diffusion systems.",
"title": ""
}
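As a concrete reference point for the method surveyed above, here is a minimal single-phase D2Q9 BGK lattice Boltzmann step in NumPy (collision followed by periodic streaming). It is a generic textbook sketch, not the multiphase/level-set coupling targeted by the query, and the grid size, relaxation time and initial condition are arbitrary.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann solver on a periodic grid.
nx, ny, tau, steps = 64, 64, 0.8, 200

# Discrete velocities and weights of the D2Q9 lattice.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

# Initial state: uniform density with a small sinusoidal shear velocity.
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(steps):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision
    for i, (cx, cy) in enumerate(c):                    # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

print("mean density:", rho.mean(), " max |ux|:", np.abs(ux).max())
```

A multiphase extension would add an interface model (e.g. a level set or color-gradient term) on top of this collide-and-stream core.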
] |
[
{
"docid": "d71faafdcf1b97951e979f13dbe91cb2",
"text": "We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrasebased statistical machine translation.",
"title": ""
},
{
"docid": "8ba192226a3c3a4f52ca36587396e85c",
"text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.",
"title": ""
},
{
"docid": "ba2c4133ac1cabe988f36ff5b381f3f9",
"text": "Developers often rely on penetration testing tools to detect vulnerabilities in web services, although frequently without really knowing their effectiveness. In fact, the lack of information on the internal state of the tested services and the complexity and variability of the responses analyzed, limits the effectiveness of such technique, highlighting the importance of evaluating and improving existing tools. The goal of this paper is to investigate if attack signatures and interface monitoring can be an effective mean to assess and improve the performance of penetration testing tools in web services environments. In practice, attacks performed by such tools are signed and the interfaces between the target application and external resources are monitored (e.g., between services and a database server), allowing gathering additional information on existing vulnerabilities. A prototype was implemented focusing on SQL injection vulnerabilities. The experimental evaluation results clearly show that the proposed approach can be used in real scenarios.",
"title": ""
},
{
"docid": "b670c8908aa2c8281b3164d7726b35d0",
"text": "We present a sketching interface for quickly and easily designing freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making wide areas fat, and narrow areas thin. Teddy, our prototype system, is implemented as a Java#8482; program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user typically masters the operations within 10 minutes, and can construct interesting 3D models within minutes.",
"title": ""
},
{
"docid": "ae585aae554c5fbe4a18f7f2996b7e93",
"text": "UNLABELLED\nCaloric restriction occurs when athletes attempt to reduce body fat or make weight. There is evidence that protein needs increase when athletes restrict calories or have low body fat.\n\n\nPURPOSE\nThe aims of this review were to evaluate the effects of dietary protein on body composition in energy-restricted resistance-trained athletes and to provide protein recommendations for these athletes.\n\n\nMETHODS\nDatabase searches were performed from earliest record to July 2013 using the terms protein, and intake, or diet, and weight, or train, or restrict, or energy, or strength, and athlete. Studies (N = 6) needed to use adult (≥ 18 yrs), energy-restricted, resistance-trained (> 6 months) humans of lower body fat (males ≤ 23% and females ≤ 35%) performing resistance training. Protein intake, fat free mass (FFM) and body fat had to be reported.\n\n\nRESULTS\nBody fat percentage decreased (0.5-6.6%) in all study groups (N = 13) and FFM decreased (0.3-2.7kg) in nine of 13. Six groups gained, did not lose, or lost nonsignificant amounts of FFM. Five out of these six groups were among the highest in body fat, lowest in caloric restriction, or underwent novel resistance training stimuli. However, the one group that was not high in body fat that underwent substantial caloric restriction, without novel training stimuli, consumed the highest protein intake out of all the groups in this review (2.5-2.6g/kg).\n\n\nCONCLUSIONS\nProtein needs for energy-restricted resistance-trained athletes are likely 2.3-3.1g/kg of FFM scaled upwards with severity of caloric restriction and leanness.",
"title": ""
},
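A small calculator for the headline recommendation above (2.3-3.1 g protein per kg of fat-free mass, scaled upwards with the severity of caloric restriction). The linear interpolation across that range is an illustrative assumption, not a formula from the review.

```python
# Rough calculator for the protein range quoted in the abstract above.
# The linear scaling with deficit severity is an illustrative assumption.
def protein_target_g(body_mass_kg, body_fat_fraction, deficit_severity):
    """deficit_severity in [0, 1]: 0 = mild caloric restriction, 1 = severe."""
    ffm_kg = body_mass_kg * (1.0 - body_fat_fraction)
    g_per_kg_ffm = 2.3 + (3.1 - 2.3) * min(max(deficit_severity, 0.0), 1.0)
    return ffm_kg * g_per_kg_ffm

# Example: an 80 kg athlete at 15% body fat under a moderate deficit.
print(round(protein_target_g(80, 0.15, 0.5), 1), "g protein/day")
```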
{
"docid": "752eea750f91318c3c45d250059cb597",
"text": "To estimate the value functions of policies from exploratory data, most model-free offpolicy algorithms rely on importance sampling, where the use of importance sampling ratios often leads to estimates with severe variance. It is thus desirable to learn off-policy without using the ratios. However, such an algorithm does not exist for multi-step learning with function approximation. In this paper, we introduce the first such algorithm based on temporal-difference (TD) learning updates. We show that an explicit use of importance sampling ratios can be eliminated by varying the amount of bootstrapping in TD updates in an action-dependent manner. Our new algorithm achieves stability using a two-timescale gradient-based TD update. A prior algorithm based on lookup table representation called Tree Backup can also be retrieved using action-dependent bootstrapping, becoming a special case of our algorithm. In two challenging off-policy tasks, we demonstrate that our algorithm is stable, effectively avoids the large variance issue, and can perform substantially better than its state-of-the-art counterpart.",
"title": ""
},
{
"docid": "0a414cd886ebf2a311d27b17c53e535f",
"text": "We consider the problem of classifying documents not by topic, but by overall sentiment. Previous approaches to sentiment classification have favored domain-specific, supervised machine learning (Naive Bayes, maximum entropy classification, and support vector machines). Inherent in these methodologies is the need for annotated training data. Building on previous work, we examine an unsupervised system of iteratively extracting positive and negative sentiment items which can be used to classify documents. Our method is completely unsupervised and only requires linguistic insight into the semantic orientation of sentiment.",
"title": ""
},
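A toy sketch of the iterative, unsupervised idea described above: start from a few seed words of known polarity, grow the positive and negative lexicons from documents that contain only one polarity, and classify documents by lexicon counts. The seeds, stopword list, expansion rule and corpus are all made-up simplifications, not the paper's exact procedure.

```python
# Illustrative seed-expansion sentiment classifier (all data and rules assumed).
docs = [
    "great plot and wonderful acting".split(),
    "wonderful cast and excellent pacing".split(),
    "terrible script and awful dialogue".split(),
    "awful ending with boring scenes".split(),
]

stop = {"and", "with", "the", "a"}
positive = {"great", "excellent"}
negative = {"terrible", "awful"}

for _ in range(3):                                   # a few expansion passes
    for words in docs:
        toks = {w for w in words if w.isalpha() and w not in stop}
        if toks & positive and not toks & negative:
            positive |= toks                         # absorb co-occurring words
        elif toks & negative and not toks & positive:
            negative |= toks

def classify(words):
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

for words in docs:
    print(" ".join(words), "->", classify(words))
```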
{
"docid": "6b0cfbadd815713179b2312293174379",
"text": "In order to take full advantage of the SiC devices' high-temperature and high-frequency capabilities, a transformer isolated gate driver is designed for the SiC JFET phase leg module to achieve a fast switching speed of 26V/ns and a small cross-talking voltage of 4.2V in a 650V and 5A inductive load test. Transformer isolated gate drive circuits suitable for high-temperature applications are compared with respect to different criteria. Based on the comparison, an improved edge triggered gate drive topology is proposed. Then, using the proposed gate drive topology, special issues in the phase-leg gate drive design are discussed. Several strategies are implemented to improve the phase-leg gate drive performance and alleviate the cross-talking issue. Simulation and experimental results are given for verification purposes.",
"title": ""
},
{
"docid": "337a37fab4eb5ed603dac81697be58eb",
"text": "Hazard analysis was conducted to identify critical control points (CCPs) during cocoa processing and milk chocolate manufacture and applied into a hazard analysis and critical control point (HACCP) plan. During the process, the different biological, physical and chemical hazards identified at each processing stage in the hazard analysis worksheet were incorporated into the HACCP plan to assess the risks associated with the processes. Physical hazards such as metals, stones, fibres, plastics and papers; chemical hazards such as pesticide residues, mycotoxins and heavy metals; and microbiological hazards such as Staphyloccous aureus, coliforms, Salmonella, Aspergillus and Penicillium were identified. ISO 22000 analysis was conducted for the determination of some pre-requisite programmes (PrPs) during the chocolate processing and compared with the HACCP system. The ISO 22000 Analysis worksheet reduced the CCPs for both cocoa processing and chocolate manufacture due to the elimination of the pre-requisite programmes (PrPs). Monitoring systems were established for the CCPs identified and these included preventive measures, critical limits, corrective actions, assignment of responsibilities and verification procedures. The incorporation of PrPs in the ISO 22000 made the system simple, more manageable and effective since a smaller number of CCPs were obtained.",
"title": ""
},
{
"docid": "5910bcdd2dcacb42d47194a70679edb1",
"text": "Developing effective suspicious activity detection methods has become an increasingly critical problem for governments and financial institutions in their efforts to fight money laundering. Previous anti-money laundering (AML) systems were mostly rule-based systems which suffered from low efficiency and could can be easily learned and evaded by money launders. Recently researchers have begun to use machine learning methods to solve the suspicious activity detection problem. However nearly all these methods focus on detecting suspicious activities on accounts or individual level. In this paper we propose a sequence matching based algorithm to identify suspicious sequences in transactions. Our method aims to pick out suspicious transaction sequences using two kinds of information as reference sequences: 1) individual account’s transaction history and 2) transaction information from other accounts in a peer group. By introducing the reference sequences, we can combat those who want to evade regulations by simply learning and adapting reporting criteria, and easily detect suspicious patterns. The initial results show that our approach is highly accurate.",
"title": ""
},
{
"docid": "1cecb4765c865c0f44c76f5ed2332c13",
"text": "Speaker indexing or diarization is an important task in audio processing and retrieval. Speaker diarization is the process of labeling a speech signal with labels corresponding to the identity of speakers. This paper includes a comprehensive review on the evolution of the technology and different approaches in speaker indexing and tries to offer a fully detailed discussion on these approaches and their contributions. This paper reviews the most common features for speaker diarization in addition to the most important approaches for speech activity detection (SAD) in diarization frameworks. Two main tasks of speaker indexing are speaker segmentation and speaker clustering. This paper includes a separate review on the approaches proposed for these subtasks. However, speaker diarization systems which combine the two tasks in a unified framework are also introduced in this paper. Another discussion concerns the approaches for online speaker indexing which has fundamental differences with traditional offline approaches. Other parts of this paper include an introduction on the most common performance measures and evaluation datasets. To conclude this paper, a complete framework for speaker indexing is proposed, which is aimed to be domain independent and parameter free and applicable for both online and offline applications. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2c5e8e4025572925e72e9f51db2b3d95",
"text": "This article reveals our work on refactoring plug-ins for Eclipse's C++ Development Tooling (CDT).\n With CDT a reliable open source IDE exists for C/C++ developers. Unfortunately it has been lacking of overarching refactoring support. There used to be just one single refactoring - Rename. But our plug-in provides several new refactorings which support a C++ developer in his everyday work.",
"title": ""
},
{
"docid": "601953c3d7b8986520e670c2b3778810",
"text": "With their intuitive graphical approach and expressive analysis techniques, Petri nets are suitable for a wide range of applications and teaching scenarios, and they have gained wide acceptance as a modeling technique in areas such as software design and control engineering. The core theoretical principles have been studied for many decades and there is now a comprehensive research literature that complements the extensive implementation experience.",
"title": ""
},
{
"docid": "f9692d0410cb97fd9c2ecf6f7b043b9f",
"text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "62e11fc84fd018ad9f5004c35c407878",
"text": "The transactional memory programming paradigm is gaining momentum as the approach of choice for replacing locks in concurrent programming. This paper introduces the transactional locking II (TL2) algorithm, a software transactional memory (STM) algorithm based on a combination of commit-time locking and a novel global version-clock based validation technique. TL2 improves on state-of-the-art STMs in the following ways: (1) unlike all other STMs it fits seamlessly with any systems memory life-cycle, including those using malloc/free (2) unlike all other lock-based STMs it efficiently avoids periods of unsafe execution, that is, using its novel version-clock validation, user code is guaranteed to operate only on consistent memory states, and (3) in a sequence of high performance benchmarks, while providing these new properties, it delivered overall performance comparable to (and in many cases better than) that of all former STM algorithms, both lock-based and non-blocking. Perhaps more importantly, on various benchmarks, TL2 delivers performance that is competitive with the best hand-crafted fine-grained concurrent structures. Specifically, it is ten-fold faster than a single lock. We believe these characteristics make TL2 a viable candidate for deployment of transactional memory today, long before hardware transactional support is available.",
"title": ""
},
{
"docid": "34c47fc822f728104f861abb8b44bcf3",
"text": "In recent years, the demand for high purity spinning processes has been growing in certain industry branches, such as the semiconductor, biotechnological, pharmaceutical, and chemical industry. Therefore, the cleanness specifications have been tightened, and hermetically sealed process chambers are preferred. This paper presents an advantageous solution for such an application featuring a large scale, wide air gap, and a high accelerating bearingless segment motor. Bearingless slice motors allow complete magnetic levitation in combination with a very compact and economic design. The disc-shaped rotor holds permanent magnets generating magnetic flux in the air gap. Hence, three degrees of freedom are passively stabilized by reluctance forces. Thus, only the radial rotor position and the rotor angle have to be controlled actively. The announced bearingless segment motor is a subtype of the bearingless slice motor, featuring separate independent stator elements. This leads to a reduction of stator iron, cost, and weight and, in addition, leaves space for sensors and electronics enabling a very compact system design.",
"title": ""
},
{
"docid": "9c3bf9101b93251d81d796c5595c155a",
"text": "Big Data Analytics (BDA) is one of themainstream technologies that change our perspectives on processing of information. Together with information security, BDA could be an extremely effective tool to learn more about communication and social networks.There will be infinite possibilities to find new methods of tracking cybercrimes using big data from different sources. BDA in information security also changes our thinking about security algorithms; they must change from a small data paradigm to big ones. This special issue is to analyze how the latest trends in this area help learn more about cyberspace and new threats using big data approaches. It contains seven papers and the details were listed as follows:",
"title": ""
},
{
"docid": "9c68b87f99450e85f3c0c6093429937d",
"text": "We present a method for activity recognition that first estimates the activity performer's location and uses it with input data for activity recognition. Existing approaches directly take video frames or entire video for feature extraction and recognition, and treat the classifier as a black box. Our method first locates the activities in each input video frame by generating an activity mask using a conditional generative adversarial network (cGAN). The generated mask is appended to color channels of input images and fed into a VGG-LSTM network for activity recognition. To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system makes activity prediction for each video frame and achieves performance comparable to the state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks facilitate the learning of features that are representative of the activity rather than accidental surrounding information.",
"title": ""
},
{
"docid": "a6c22ddea3d32eb1a435ea23c1cb52b0",
"text": "In this paper the letter segmentation of photographs was used, taken from a Parrot AR Drone’s camera with the aim of establishing a stimulusresponse, where the original picture formed by Red, Green and Blue (RGB) colors was segmented by color (choosing the red channel). Once the character is recognized, the Drone executes the corresponding action. Noise-free number patterns were initially used and then some pixels were added in the image in order to make a set of patterns more robust, which provided the training set for neural network and thus are able to interpolate new patterns. Edge techniques detection were used for image segmentation including Sobel filter and filters for noise removal based on the median filtering, that is a low pass filter. All this took place in a closed environment, expecting to extend this to different environments. 61 Research in Computing Science 107 (2015) pp. 61–71; rec. 2015-08-04; acc. 2015-10-19",
"title": ""
},
{
"docid": "5c0d74be236f8836017dc2c1f6de16df",
"text": "Person re-identification is the problem of recognizing people across images or videos from non-overlapping views. Although there has been much progress in person re-identification for the last decade, it still remains a challenging task because of severe appearance changes of a person due to diverse camera viewpoints and person poses. In this paper, we propose a novel framework for person reidentification by analyzing camera viewpoints and person poses, so-called Pose-aware Multi-shot Matching (PaMM), which robustly estimates target poses and efficiently conducts multi-shot matching based on the target pose information. Experimental results using public person reidentification datasets show that the proposed methods are promising for person re-identification under diverse viewpoints and pose variances.",
"title": ""
}
] |
scidocsrr
|
14243abce801c0d4bdea19e0cc1bae07
|
Bias in OLAP Queries: Detection, Explanation, and Removal
|
[
{
"docid": "8b6e2ef05f59868363beaa9b810a8d36",
"text": "Causal inference from observational data is a subject of active research and development in statistics and computer science. Many statistical software packages have been developed for this purpose. However, these toolkits do not scale to large datasets. We propose and demonstrate ZaliQL: a SQL-based framework for drawing causal inference from observational data. ZaliQL supports the state-of-the-art methods for causal inference and runs at scale within PostgreSQL database system. In addition, we built a visual interface to wrap around ZaliQL. In our demonstration, we will use this GUI to show a live investigation of the causal effect of different weather conditions on flight delays.",
"title": ""
},
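ZaliQL pushes causal adjustments into SQL; the same basic stratification can be sketched in pandas (used here rather than SQL for brevity). The snippet below compares a naive difference in delay rates with a confounder-stratified estimate on synthetic flight data; column names and the data-generating process are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 10_000
congestion = rng.integers(0, 3, n)                       # confounder strata
bad_weather = rng.random(n) < 0.2 + 0.2 * congestion     # treatment depends on confounder
delay = rng.random(n) < 0.1 + 0.15 * bad_weather + 0.1 * congestion

df = pd.DataFrame({"congestion": congestion,
                   "bad_weather": bad_weather,
                   "delayed": delay.astype(int)})

# Delay rate per (stratum, treatment) cell, then stratum-wise differences
# averaged with weights proportional to stratum size.
per_stratum = (df.groupby(["congestion", "bad_weather"])["delayed"].mean()
                 .unstack("bad_weather"))
effect_by_stratum = per_stratum[True] - per_stratum[False]
weights = df.groupby("congestion").size() / len(df)

print("naive difference :", round(df[df.bad_weather].delayed.mean()
                                  - df[~df.bad_weather].delayed.mean(), 3))
print("stratified effect:", round((effect_by_stratum * weights).sum(), 3))
```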
{
"docid": "ec6c62f25c987446522b49840c4242d7",
"text": "Have you ever been in a sauna? If yes, according to our recent survey conducted on Amazon Mechanical Turk, people who go to saunas are more likely to know that Mike Stonebraker is not a character in “The Simpsons”. While this result clearly makes no sense, recently proposed tools to automatically suggest visualizations, correlations, or perform visual data exploration, significantly increase the chance that a user makes a false discovery like this one. In this paper, we first show how current tools mislead users to consider random fluctuations as significant discoveries. We then describe our vision and early results for QUDE, a new system for automatically controlling the various risk factors during the data exploration process.",
"title": ""
}
] |
[
{
"docid": "074fd9d0c7bd9e5f31beb77c140f61d0",
"text": "In this chapter, we examine the self and identity by considering the different conditions under which these are affected by the groups to which people belong. From a social identity perspective we argue that group commitment, on the one hand, and features of the social context, on the other hand, are crucial determinants of central identity concerns. We develop a taxonomy of situations to reflect the different concerns and motives that come into play as a result of threats to personal and group identity and degree of commitment to the group. We specify for each cell in this taxonomy how these issues of self and social identity impinge upon a broad variety of responses at the perceptual, affective, and behavioral level.",
"title": ""
},
{
"docid": "552276c35889e4cf0492b164a58e25c5",
"text": "the numbers of the botnet attacks are increasing day by day and the detection of botnet spreading in the network has become very challenging. Bots are having specific characteristics in comparison of normal malware as they are controlled by the remote master server and usually don’t show their behavior like normal malware until they don’t receive any command from their master server. Most of time bot malware are inactive, hence it is very difficult to detect. Further the detection or tracking of the network of theses bots requires an infrastructure that should be able to collect the data from a diverse range of data sources and correlate the data to bring the bigger picture in view.In this paper, we are sharing our experience of botnet detection in the private network as well as in public zone by deploying the nepenthes honeypots. The automated framework for malware collection using nepenthes and analysis using antivirus scan are discussed. The experimental results of botnet detection by enabling nepenthes honeypots in network are shown. Also we saw that existing known bots in our network can be detected.",
"title": ""
},
{
"docid": "30eb03eca06dcc006a28b5e00431d9ed",
"text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for the application in an implantable closed-loop device.",
"title": ""
},
{
"docid": "6c857ae5ce9db878c7ecd4263604874e",
"text": "In the investigations of chaos in dynamical systems a major role is played by symbolic dynamics, i.e. the description of the system by a shift on a symbol space via conjugation. We examine whether any kind of noise can strengthen the stochastic behaviour of chaotic systems dramatically and what the consequences for the symbolic description are. This leads to the introduction of random subshifts of nite type which are appropriate for the description of quite general dynamical systems evolving under the innuence of noise and showing internal stochastic features. We investigate some of the ergodic and stochastic properties of these shifts and show situations when they behave dynamically like the common shifts. In particular we want to present examples where such random shift systems appear as symbolic descriptions.",
"title": ""
},
{
"docid": "8220c4e04fd0871442564fa65938e436",
"text": "Cyber-Physical Systems (CPSs) represent systems where computations are tightly coupled with the physical world, meaning that physical data is the core component that drives computation. Industrial automation systems, wireless sensor networks, mobile robots and vehicular networks are just a sample of cyber-physical systems. Typically, CPSs have limited computation and storage capabilities due to their tiny size and being embedded into larger systems. With the emergence of cloud computing and the Internetof-Things (IoT), there are several new opportunities for these CPSs to extend their capabilities by taking advantage of the cloud resources in different ways. In this survey paper, we present an overview of research effort s on the integration of cyber-physical systems with cloud computing and categorize them into three areas: (1) remote brain, (2) big data manipulation, (3) and virtualization. In particular, we focus on three major CPSs namely mobile robots, wireless sensor networks and vehicular networks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "eda242b58e5ed2a2736cb7cccc73220e",
"text": "This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.",
"title": ""
},
{
"docid": "ebf31b75aad0eb366959243ab8160131",
"text": "Angiogenesis, the growth of new blood vessels from pre-existing vessels, represents an excellent therapeutic target for the treatment of wound healing and cardiovascular disease. Herein, we report that LPLI (low-power laser irradiation) activates ERK/Sp1 (extracellular signal-regulated kinase/specificity protein 1) pathway to promote VEGF expression and vascular endothelial cell proliferation. We demonstrate for the first time that LPLI enhances DNA-binding and transactivation activity of Sp1 on VEGF promoter in vascular endothelial cells. Moreover, Sp1-regulated transcription is in an ERK-dependent manner. Activated ERK by LPLI translocates from cytoplasm to nuclear and leads to increasing interaction with Sp1, triggering a progressive phosphorylation of Sp1 on Thr453 and Thr739, resulting in the upregulation of VEGF expression. Furthermore, selective inhibition of Sp1 by mithramycin-A or shRNA suppresses the promotion effect of LPLI on cell cycle progression and proliferation, which is also significantly abolished by inhibition of ERK activity. These findings highlight the important roles of ERK/Sp1 pathway in angiogenesis, supplying potential strategy for angiogenesis-related diseases with LPLI treatment.",
"title": ""
},
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
},
{
"docid": "7240d65e0bc849a569d840a461157b2c",
"text": "Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.",
"title": ""
},
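The core trick above is to run the expensive CNN only on key frames and warp its feature maps to nearby frames along a flow field. A minimal sketch of that propagation step, using bilinear sampling from SciPy and random stand-ins for the feature map and the flow (normally produced by the CNN and a flow network), is shown below.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical shapes: C feature channels on an H x W grid.
C, H, W = 8, 32, 32
key_feat = np.random.rand(C, H, W).astype(np.float32)   # CNN features of the key frame
flow = np.random.uniform(-2, 2, size=(2, H, W))         # per-pixel displacement (dy, dx)

ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
sample_y = ys + flow[0]          # where each output pixel reads from in the key frame
sample_x = xs + flow[1]

# Bilinear warp of every channel along the flow field.
warped = np.stack([
    map_coordinates(key_feat[c], [sample_y, sample_x], order=1, mode="nearest")
    for c in range(C)
])
print(warped.shape)              # (C, H, W): current-frame features without rerunning the CNN
```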
{
"docid": "ef39209e61597136d5a954c70fcecbfe",
"text": "We introduce the Android Security Framework (ASF), a generic, extensible security framework for Android that enables the development and integration of a wide spectrum of security models in form of code-based security modules. The design of ASF reflects lessons learned from the literature on established security frameworks (such as Linux Security Modules or the BSD MAC Framework) and intertwines them with the particular requirements and challenges from the design of Android's software stack. ASF provides a novel security API that supports authors of Android security extensions in developing their modules. This overcomes the current unsatisfactory situation to provide security solutions as separate patches to the Android software stack or to embed them into Android's mainline codebase. This system security extensibility is of particular benefit for enterprise or government solutions that require deployment of advanced security models, not supported by vanilla Android. We present a prototypical implementation of ASF and demonstrate its effectiveness and efficiency by modularizing different security models from related work, such as dynamic permissions, inlined reference monitoring, and type enforcement.",
"title": ""
},
{
"docid": "18969bed489bb9fa7196634a8086449e",
"text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.",
"title": ""
},
{
"docid": "79844bc05388cc1436bb5388e88f6daa",
"text": "The growing number of Unmanned Aerial Vehicles (UAVs) is considerable in the last decades. Many flight test scenarios, including single and multi-vehicle formation flights, are demonstrated using different control algorithms with different test platforms. In this paper, we present a brief literature review on the development and key issues of current researches in the field of Fault-Tolerant Control (FTC) applied to UAVs. It consists of various intelligent or hierarchical control architectures for a single vehicle or a group of UAVs in order to provide potential solutions for tolerance to the faults, failures or damages in relevant to UAV components during flight. Among various UAV test-bed structures, a sample of every class of UAVs, including single-rotor, quadrotor, and fixed-wing types, are selected and briefly illustrated. Also, a short description of terms, definitions, and classifications of fault-tolerant control systems (FTCS) is presented before the main contents of review.",
"title": ""
},
{
"docid": "0bd981ea6d38817b560383f48fdfb729",
"text": "Lightweight wheelchairs are characterized by their low cost and limited range of adjustment. Our study evaluated three different folding lightweight wheelchair models using the American National Standards Institute/Rehabilitation Engineering Society of North America (ANSI/RESNA) standards to see whether quality had improved since the previous data were reported. On the basis of reports of increasing breakdown rates in the community, we hypothesized that the quality of these wheelchairs had declined. Seven of the nine wheelchairs tested failed to pass the multidrum test durability requirements. An average of 194,502 +/- 172,668 equivalent cycles was completed, which is similar to the previous test results and far below the 400,000 minimum required to pass the ANSI/RESNA requirements. This was also significantly worse than the test results for aluminum ultralight folding wheelchairs. Overall, our results uncovered some disturbing issues with these wheelchairs and suggest that manufacturers should put more effort into this category to improve quality. To improve the durability of lightweight wheelchairs, we suggested that stronger regulations be developed that require wheelchairs to be tested by independent and certified test laboratories. We also proposed a wheelchair rating system based on the National Highway Transportation Safety Administration vehicle crash ratings to assist clinicians and end users when comparing the durability of different wheelchairs.",
"title": ""
},
{
"docid": "b5a349b6d805c2b5afac86bfe22050df",
"text": "By setting apart the two functions of a support vector machine: separation of points by a nonlinear surface in the original space of patterns, and maximizing the distance between separating planes in a higher dimensional space, we are able to deene indeenite, possibly discontinuous, kernels, not necessarily inner product ones, that generate highly nonlin-ear separating surfaces. Maximizing the distance between the separating planes in the higher dimensional space is surrogated by support vector suppression, which is achieved by minimizing any desired norm of support vector multipliers. The norm may be one induced by the separation kernel if it happens to be positive deenite, or a Euclidean or a polyhe-dral norm. The latter norm leads to a linear program whereas the former norms lead to convex quadratic programs, all with an arbitrary separation kernel. A standard support vector machine can be recovered by using the same kernel for separation and support vector suppression. On a simple test example, all models perform equally well when a positive deenite kernel is used. When a negative deenite kernel is used, we are unable to solve the nonconvex quadratic program associated with a conventional support vector machine, while all other proposed models remain convex and easily generate a surface that separates all given points.",
"title": ""
},
{
"docid": "655302a1df16af206ab8341a710d9e90",
"text": "Researchers in both machine translation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying bilingual corpora, bodies of text such as the Canadian Hansards (parliamentary proceedings) which are available in multiple languages (such as French and English). One useful step is to align the sentences, that is, to identify correspondences between sentences in one language and sentences in the other language. This paper will describe a method and a program (align) for aligning sentences based on a simple statistical model of character lengths. The program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences (in characters) and the variance of this difference. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences. It is remarkable that such a simple approach works as well as it does. An evaluation was performed based on a trilingual corpus of economic reports issued by the Union Bank of Switzerland (UBS) in English, French and German. The method correctly aligned all but 4% of the sentences. Moreover, it is possible to extract a large subcorpus which has a much smaller error rate. By selecting the best scoring 80% of the alignments, the error rate is reduced from 4% to 0.7%. There were more errors on the English-French subcorpus than on the English-German subcorpus, showing that error rates will depend on the corpus considered, however, both were small enough to hope that the method will be useful for many language pairs. To further research on bilingual corpora, a much larger sample of Canadian Hansards (approximately 90 million words, half in English and and half in French) has been aligned with the align program and will be available through the Data Collection Initiative of the Association for Computational Linguistics (ACL/DCI). In addition, in order to facilitate replication of the align program, an appendix is provided with detailed c-code of the more difficult core of the align program.",
"title": ""
},
{
"docid": "c8cb32e37aa01b712c7e6921800fbe60",
"text": "Risky families are characterized by conflict and aggression and by relationships that are cold, unsupportive, and neglectful. These family characteristics create vulnerabilities and/or interact with genetically based vulnerabilities in offspring that produce disruptions in psychosocial functioning (specifically emotion processing and social competence), disruptions in stress-responsive biological regulatory systems, including sympathetic-adrenomedullary and hypothalamic-pituitary-adrenocortical functioning, and poor health behaviors, especially substance abuse. This integrated biobehavioral profile leads to consequent accumulating risk for mental health disorders, major chronic diseases, and early mortality. We conclude that childhood family environments represent vital links for understanding mental and physical health across the life span.",
"title": ""
},
{
"docid": "b4d85eae82415b0a8dcd5e9f6eadbc6f",
"text": "We compared the effects of children’s reading of an educational electronic storybook on their emergent literacy with those of being read the same story in its printed version by an adult. We investigated 128 5to 6-year-old kindergarteners; 64 children from each of two socio-economic status (SES) groups: low (LSES) and middle (MSES). In each group, children were randomly assigned to one of three subgroups. The two intervention groups included three book reading sessions each; children in one group individually read the electronic book; in the second group, the children were read the same printed book by an adult; children in the third group, which served as a control, received the regular kindergarten programme. Preand post-intervention emergent literacy measures included vocabulary, word recognition and phonological awareness. Compared with the control group, the children’s vocabulary scores in both intervention groups improved following reading activity. Children from both interventions groups and both SES groups showed a similarly good level of story comprehension. In both SES groups, compared with the control group, children’s phonological awareness and word recognition did not improve following both reading interventions. Implications for future research and for education are discussed.",
"title": ""
},
{
"docid": "2dbc68492e54d61446dac7880db71fdd",
"text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.",
"title": ""
},
{
"docid": "72839a67032eba63246dd2bdf5799f75",
"text": "We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning.",
"title": ""
},
{
"docid": "6b3ac9bc7aa64c1d4329e4705225e369",
"text": "Apps supporting social networking in proximity are gaining momentum as they enable to both augment face-to-face interaction with a digital channel (e.g. classroom interaction systems) and augment digital interaction by providing a local real life feeling to it (e.g. nearby friends app in Facebook). Such apps effectively provide a cyber-physical space interweaving digital and face-to-face interaction. Currently such applications are mainly relying on Internet connection to the cloud, which makes them inaccessible in parts of the world with scarce Internet connection. Since many of their interactions happen locally, they could theoretically rely on Mobile Networking in Proximity (MNP), where data could be exchanged among devices without the need to rely on the availability of an Internet connection. Unfortunately, there is a lack of off-the-shelf programing support for MNP. This paper addresses this issue and presents Padoc, a middleware for social networking in proximity that provides multi-hop MNP support when cloud connection is unavailable. Furthermore the paper evaluates three MNP message diffusion strategies and presents Heya a novel classroom interaction app running on iOS devices as a proof-of-concept built on top of Padoc.",
"title": ""
}
] |
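One passage in the list above describes a length-based sentence aligner in which a probabilistic score derived from the scaled difference of character lengths drives a dynamic-programming search for the maximum likelihood alignment. The following Python sketch is a minimal, hedged reconstruction of that idea, not the original align program: the move penalties, the variance constant, and the Gaussian tail model are illustrative assumptions rather than the published parameters.

```python
import math

# Illustrative model constants: target length ~ C * source length,
# with variance proportional to the mean length (values are assumptions).
C, VAR = 1.0, 6.8

def match_cost(len1, len2):
    """Negative log-probability that chunks of len1/len2 characters correspond."""
    if len1 == 0 and len2 == 0:
        return 0.0
    mean = (len1 + len2 / C) / 2.0
    delta = (len2 - len1 * C) / math.sqrt(VAR * mean)
    # two-sided tail of a standard normal via erfc, floored to avoid log(0)
    prob = max(math.erfc(abs(delta) / math.sqrt(2.0)), 1e-12)
    return -math.log(prob)

def align(src, tgt):
    """DP alignment of two sentence lists using 1-1, 1-0, 0-1, 2-1 and 1-2 moves."""
    ls, lt = [len(s) for s in src], [len(t) for t in tgt]
    n, m = len(src), len(tgt)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    # (sentences consumed on each side, illustrative -log prior of the move type)
    moves = [(1, 1, 0.0), (1, 0, 4.0), (0, 1, 4.0), (2, 1, 2.0), (1, 2, 2.0)]
    for i in range(n + 1):
        for j in range(m + 1):
            for di, dj, pen in moves:
                pi, pj = i - di, j - dj
                if pi < 0 or pj < 0 or cost[pi][pj] == INF:
                    continue
                c = cost[pi][pj] + pen + match_cost(sum(ls[pi:i]), sum(lt[pj:j]))
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (di, dj)
    # trace back into aligned (source block, target block) pairs
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        di, dj = back[i][j]
        pairs.append((src[i - di:i], tgt[j - dj:j]))
        i, j = i - di, j - dj
    return list(reversed(pairs))
```

A 2-1 or 1-2 move merges two adjacent sentences on one side before scoring, which is how length-based aligners of this kind handle sentences that are split or joined in translation.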
scidocsrr
|
da7675530762e5374a62d803c68b31f6
|
When Will You Arrive? Estimating Travel Time Based on Deep Neural Networks
|
[
{
"docid": "a2891655fbb08c584c6efe07ee419fb7",
"text": "Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, such as inter-region traffic, events, and weather. We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast the inflow and outflow of crowds in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. Experiments on two types of crowd flows in Beijing and New York City (NYC) demonstrate that the proposed ST-ResNet outperforms six well-known methods.",
"title": ""
}
] |
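The positive passage above outlines ST-ResNet: three residual convolutional branches modeling temporal closeness, period, and trend over a city grid, fused with learned weights and combined with external factors such as weather and day of week. The PyTorch sketch below illustrates that structure only; the channel width, grid size, number of residual units, and the external-factor head are assumed values, not the published configuration.

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """A basic residual convolutional unit operating on city-grid flow maps."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(torch.relu(x))))

class STResNetSketch(nn.Module):
    """Closeness / period / trend branches fused by learned per-branch weight maps,
    plus a small dense path for external factors."""
    def __init__(self, in_flows=2, seq_len=3, ext_dim=8, channels=32, n_res=2, grid=(16, 16)):
        super().__init__()
        def branch():
            layers = [nn.Conv2d(in_flows * seq_len, channels, 3, padding=1)]
            layers += [ResUnit(channels) for _ in range(n_res)]
            layers += [nn.Conv2d(channels, in_flows, 3, padding=1)]
            return nn.Sequential(*layers)
        self.closeness, self.period, self.trend = branch(), branch(), branch()
        # one learnable fusion weight map per branch
        self.w = nn.ParameterList(
            [nn.Parameter(torch.ones(in_flows, *grid)) for _ in range(3)])
        self.ext = nn.Sequential(nn.Linear(ext_dim, 10), nn.ReLU(),
                                 nn.Linear(10, in_flows * grid[0] * grid[1]))
        self.grid, self.in_flows = grid, in_flows

    def forward(self, xc, xp, xt, ext):
        fused = (self.w[0] * self.closeness(xc) + self.w[1] * self.period(xp)
                 + self.w[2] * self.trend(xt))
        e = self.ext(ext).view(-1, self.in_flows, *self.grid)
        return torch.tanh(fused + e)

# toy usage: batch of 4 samples, 3 past frames of inflow/outflow on a 16x16 grid
model = STResNetSketch()
xc = xp = xt = torch.randn(4, 2 * 3, 16, 16)
out = model(xc, xp, xt, torch.randn(4, 8))   # shape (4, 2, 16, 16)
```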
[
{
"docid": "de8661c2e63188464de6b345bfe3a908",
"text": "Modern computer games show potential not just for engaging and entertaining users, but also in promoting learning. Game designers employ a range of techniques to promote long-term user engagement and motivation. These techniques are increasingly being employed in so-called serious games, games that have nonentertainment purposes such as education or training. Although such games share the goal of AIED of promoting deep learner engagement with subject matter, the techniques employed are very different. Can AIED technologies complement and enhance serious game design techniques, or does good serious game design render AIED techniques superfluous? This paper explores these questions in the context of the Tactical Language Training System (TLTS), a program that supports rapid acquisition of foreign language and cultural skills. The TLTS combines game design principles and game development tools with learner modelling, pedagogical agents, and pedagogical dramas. Learners carry out missions in a simulated game world, interacting with non-player characters. A virtual aide assists the learners if they run into difficulties, and gives performance feedback in the context of preparatory exercises. Artificial intelligence plays a key role in controlling the behaviour of the non-player characters in the game; intelligent tutoring provides supplementary scaffolding.",
"title": ""
},
{
"docid": "aea24b4cbacdbe798c01996af2bf8a38",
"text": "Transient receptor potential vanilloid type 4 (TRPV4) is a calcium-permeable nonselective cation channel, originally described in 2000 by research teams led by Schultz (Nat Cell Biol 2: 695-702, 2000) and Liedtke (Cell 103: 525-535, 2000). TRPV4 is now recognized as being a polymodal ionotropic receptor that is activated by a disparate array of stimuli, ranging from hypotonicity to heat and acidic pH. Importantly, this ion channel is constitutively expressed and capable of spontaneous activity in the absence of agonist stimulation, which suggests that it serves important physiological functions, as does its widespread dissemination throughout the body and its capacity to interact with other proteins. Not surprisingly, therefore, it has emerged more recently that TRPV4 fulfills a great number of important physiological roles and that various disease states are attributable to the absence, or abnormal functioning, of this ion channel. Here, we review the known characteristics of this ion channel's structure, localization and function, including its activators, and examine its functional importance in health and disease.",
"title": ""
},
{
"docid": "f6e8c34656a40fd7b97c3e84d6ba8ebb",
"text": "We propose a novel approach to fully automatic lesion boundary detection in ultrasound breast images. The novelty of the proposed work lies in the complete automation of the manual process of initial Region-of-Interest (ROI) labeling and in the procedure adopted for the subsequent lesion boundary detection. Histogram equalization is initially used to preprocess the images followed by hybrid filtering and multifractal analysis stages. Subsequently, a single valued thresholding segmentation stage and a rule-based approach is used for the identification of the lesion ROI and the point of interest that is used as the seed-point. Next, starting from this point an Isotropic Gaussian function is applied on the inverted, original ultrasound image. The lesion area is then separated from the background by a thresholding segmentation stage and the initial boundary is detected via edge detection. Finally to further improve and refine the initial boundary, we make use of a state-of-the-art active contour method (i.e. gradient vector flow (GVF) snake model). We provide results that include judgments from expert radiologists on 360 ultrasound images proving that the final boundary detected by the proposed method is highly accurate. We compare the proposed method with two existing stateof-the-art methods, namely the radial gradient index filtering (RGI) technique of Drukker et. al. and the local mean technique proposed by Yap et. al., in proving the proposed method’s robustness and accuracy.",
"title": ""
},
{
"docid": "124a50c2e797ffe549e1591d5720acda",
"text": "Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.",
"title": ""
},
{
"docid": "ed61f8946d674ed2823d2a861717775c",
"text": "This paper presents a 0.13μm SHA-less pipeline ADC with LMS calibration technique. The nonlinearity of the first three stages is calibrated with blind LMS algorithm. Opamps and switches are carefully considered and co-designed with the calibration system. Around 7LSB closed-loop nonlinearity of MDAC is achieved. Simulation shows the SNDR of the proposed ADC at 200MS/s sampling rate is 78dB with 3.13MHz input and 75dB with 83.13MHz input.",
"title": ""
},
{
"docid": "20e0c6d52a8973a8da4269705085016f",
"text": "The Great Barrier Reef (GBR) is a World Heritage Area and contains extensive areas of coral reef, seagrass meadows and fisheries resources. From adjacent catchments, numerous rivers discharge pollutants from agricultural, urban, mining and industrial activity. Pollutant sources have been identified and include suspended sediment from erosion in cattle grazing areas; nitrate from fertiliser application on crop lands; and herbicides from various land uses. The fate and effects of these pollutants in the receiving marine environment are relatively well understood. The Australian and Queensland Governments responded to the concerns of pollution of the GBR from catchment runoff with a plan to address this issue in 2003 (Reef Plan; updated 2009), incentive-based voluntary management initiatives in 2007 (Reef Rescue) and a State regulatory approach in 2009, the Reef Protection Package. This paper reviews new research relevant to the catchment to GBR continuum and evaluates the appropriateness of current management responses.",
"title": ""
},
{
"docid": "a03761cb260e132b4041d40b5a11137d",
"text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"title": ""
},
{
"docid": "53c5366ddb389e4b4822e5395e416380",
"text": "Information exchange in the just about any cluster of computer has to be more secure in the era of cloud computing and big data. Steganography helps to prevent illegal attention through covering the secret message in a number of digitally electronically representative media, without hurting the accessibility of secret message. Image steganography methods are recently been helpful to send any secret message in the protected image carrier to prevent threats and attacks whereas it does not give any kind of opportunity to hackers to find out the secret concept. Inside a steganographic system secrets information is embedded inside of a cover file, to ensure that no one will suspect that anything perhaps there is inside carrier. The cover file could be image, audio or video. To really make it safer, the secrets information might be encrypted embedded then, it will be decrypted in the receiver. In this paper, we are reviewing some digital image steganographic techniques depending on LSB (least significant bit) & LSB array concept.",
"title": ""
},
{
"docid": "fc63f044042826814cc761904b863967",
"text": "iv ABSTRACT Element Detection in Japanese Comic Book Panels Toshihiro Kuboi Comic books are a unique and increasingly popular form of entertainment combining visual and textual elements of communication. This work pertains to making comic books more accessible. Specifically, this paper explains how we detect elements such as speech bubbles present in Japanese comic book panels. Some applications of the work presented in this paper are automatic detection of text and its transformation into audio or into other languages. Automatic detection of elements can also allow reasoning and analysis at a deeper semantic level than what's possible today. Our approach uses an expert system and a machine learning system. The expert system process information from images and inspires feature sets which help train the machine learning system. The expert system detects speech bubbles based on heuristics. The machine learning system uses machine learning algorithms. Specifically, Naive Bayes, Maximum Entropy, and support vector machine are used to detect speech bubbles. The algorithms are trained in a fully-supervised way and a semi-supervised way. Both the expert system and the machine learning system achieved high accuracy. We are able to train the machine learning algorithms to detect speech bubbles just as accurately as the expert system. We also applied the same approach to eye detection of characters in the panels, and are able to detect majority of the eyes but with low precision. However, we are able to improve the performance of our eye detection system significantly by combining the SVM and either the Naive Bayes or the AdaBoost classifiers. v ACKNOWLEDGMENTS This thesis is inspired by Eriq Augustine's original project on the machine translation of Japanese comic books. I would like to thank Eriq for his guidance and assistance on the selection of this thesis topic.",
"title": ""
},
{
"docid": "ce294cb6467dd11f8bf61ab12783f4a2",
"text": "There is no such thing as too much of a good thing-at least when it comes to well written and comprehensive graduate level texts in any technical field that has been growing as fast as neural computing. In recent years there has been a proliferation of neural network related textbooks that ’attempt to give a broad, mathematically rigorous introduction to the theory and applications of the field for an audience of either professional engineers, graduate students, or both. Among the most notable and recent perhaps are the excellent texts by Haykin [l], Zurada [2], Kung [ 3 ] , and Hertz et. al. [4]. Add to this dozens of volumes of paper collections, and expositions from different perspectives (AI, physics, cognitive science, VLSI (very large scale integration), and parallel processing, etc.), as well as other textbooks, and you rapidly converge to the global minimum: so many books, so little time! If you are interested in learning about the underlying theory of neural computation, however, then perhaps alongside your perusal of the above texts you should also look at yet another one. Fundamentals of ArtiJcial Neural Networks emphasizes fundamental theoretical aspects of the computational capabilities and learning abilities of artificial neural networks (ANN’S). The book is intended for either first year graduate students in electrical or computer science and engineering, or practicing engineers and researchers in the field. It has evolved from a series of lecture notes of two courses on ANN’S taught by the author over the past six years at Wayne State University. Apart from the usual prerequisites of mathematical maturity (probability theory, differential equations, linear algebra, multivariate calculus), the reader is assumed to be familiar with system theory, the concept of a “state,” as well as Boolean algebra and switching theory basics. The author himself is a well-established researcher in the field, with dozens of papers to his credit. The book is well organized and presented, and a delight to read. Exercises at the end of each chapter (some 200) complement the text, and range in difficulty level from the very basic to mathematically or numerically challenging. About 700 relevant references are also provided at the end of the book. The book is centered on the idea of viewing ANN’S as nonlinear adaptive parallel computational models of varying degrees of complexity. Accordingly, the author starts out in Chapter 1 by an exposition of the computational capab es of the simplest models, namely linear and polynomial threshold gates. This is built upon basic concepts of switching theory. The discussion is then extended to the capacity and generalization ability of threshold gates via a proof of the function counting theorem. The treatment is coherent and mathematically rigorous Chapter 2 picks up from the discussion in Chapter 1 by considering networks of linear threshold gates (LTG’s) as well as neuronal units with nonlinear activation functions, and investigates their mapping capabilities. Important theoretical results on bounds on the number of functions realizable by a feedforward network of LTG’s, as",
"title": ""
},
{
"docid": "a76be3ebe7b169f3669243271d2474a6",
"text": "Sophisticated video processing effects require both image and geometry information. We explore the possibility to augment a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into a practical RGBZ video camera, we develop efficient data filtering techniques that are tailored to the noise characteristics of IR depth cameras. We first remove typical artefacts in the RGBZ data and then apply an efficient spatiotemporal denoising and upsampling scheme. This allows us to record temporally coherent RGBZ videos at interactive frame rates and to use them to render a variety of effects in unprecedented quality. We show effects such as video relighting, geometry-based abstraction and stylisation, background segmentation and rendering in stereoscopic 3D.",
"title": ""
},
{
"docid": "a8ddaed8209d09998159014307233874",
"text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.",
"title": ""
},
{
"docid": "6025fb8936761dcf3c6751545b430ec0",
"text": "Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.",
"title": ""
},
{
"docid": "081b09442d347a4a29d8cc3978079f79",
"text": "The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.",
"title": ""
},
{
"docid": "8bc0edddcfac4aabb7fcf0fe4ed8035b",
"text": "Nowadays, there are many taxis traversing around the city searching for available passengers, but their hunts of passengers are not always efficient. To the dynamics of traffic and biased passenger distributions, current offline recommendations based on place of interests may not work well. In this paper, we define a new problem, global-optimal trajectory retrieving (GOTR), as finding a connected trajectory of high profit and high probability to pick up a passenger within a given time period in real-time. To tackle this challenging problem, we present a system, called HUNTS, based on the knowledge from both historical and online GPS data and business data. To achieve above objectives, first, we propose a dynamic scoring system to evaluate each road segment in different time periods by considering both picking-up rate and profit factors. Second, we introduce a novel method, called trajectory sewing, based on a heuristic method and the Skyline technique, to produce an approximate optimal trajectory in real-time. Our method produces a connected trajectory rather than several place of interests to avoid frequent next-hop queries. Third, to avoid congestion and other real-time traffic situations, we update the score of each road segment constantly via an online handler. Finally, we validate our system using a large-scale data of around 15,000 taxis in a large city in China, and compare the results with regular taxis' hunts and the state-of-the-art.",
"title": ""
},
{
"docid": "980ad058a2856048765f497683557386",
"text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.",
"title": ""
},
{
"docid": "16915e2da37f8cd6fa1ce3a4506223ff",
"text": "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.",
"title": ""
}
] |
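One of the passages in the list above surveys LSB-based image steganography, in which secret bits are hidden in the least significant bits of cover-image pixels. As a concrete illustration of the basic idea that survey reviews, here is a minimal plain-LSB embed/extract sketch in NumPy; it omits the encryption step the passage mentions, and it is a generic sketch rather than any specific technique from that survey.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide a byte string in the least significant bits of a uint8 image array."""
    flat = cover.flatten()                      # flatten() returns a copy of the cover
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("message does not fit in the cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, then write the bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the least significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# toy usage on a random grayscale "image"
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = b"hello"
stego = embed_lsb(img, secret)
assert extract_lsb(stego, len(secret)) == secret
```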
scidocsrr
|
3b1f2e12d3b4042889fc551daf6e9cf7
|
Template Based Inference in Symmetric Relational Markov Random Fields
|
[
{
"docid": "9707365fac6490f52b328c2b039915b6",
"text": "Identification of protein–protein interactions often provides insight into protein function, and many cellular processes are performed by stable protein complexes. We used tandem affinity purification to process 4,562 different tagged proteins of the yeast Saccharomyces cerevisiae. Each preparation was analysed by both matrix-assisted laser desorption/ionization–time of flight mass spectrometry and liquid chromatography tandem mass spectrometry to increase coverage and accuracy. Machine learning was used to integrate the mass spectrometry scores and assign probabilities to the protein–protein interactions. Among 4,087 different proteins identified with high confidence by mass spectrometry from 2,357 successful purifications, our core data set (median precision of 0.69) comprises 7,123 protein–protein interactions involving 2,708 proteins. A Markov clustering algorithm organized these interactions into 547 protein complexes averaging 4.9 subunits per complex, about half of them absent from the MIPS database, as well as 429 additional interactions between pairs of complexes. The data (all of which are available online) will help future studies on individual proteins as well as functional genomics and systems biology.",
"title": ""
},
{
"docid": "8dc493568e94d94370f78e663da7df96",
"text": "Expertise in C++, C, Perl, Haskell, Linux system administration. Technical experience in compiler design and implementation, release engineering, network administration, FPGAs, hardware design, probabilistic inference, machine learning, web search engines, cryptography, datamining, databases (SQL, Oracle, PL/SQL, XML), distributed knowledge bases, machine vision, automated web content generation, 2D and 3D graphics, distributed computing, scientific and numerical computing, optimization, virtualization (Xen, VirtualBox). Also experience in risk analysis, finance, game theory, firm behavior, international economics. Familiar with Java, C++ Standard Template Library, Java Native Interface, Java Foundation Classes, Android development, MATLAB, CPLEX, NetPBM, Cascading Style Sheets (CSS), Tcl/Tk, Windows system administration, Mac OS X system administration, ElasticSearch, modifying the Ubuntu installer.",
"title": ""
}
] |
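The protein-interaction passage above organizes pairwise interactions into complexes with a Markov clustering algorithm. The sketch below is a bare-bones version of generic MCL (expansion, inflation, column renormalization) on a toy graph; the inflation parameter, convergence test, and cluster extraction are simplified assumptions rather than the exact procedure used in that study.

```python
import numpy as np

def mcl(adj: np.ndarray, inflation: float = 2.0, expansion: int = 2,
        max_iter: int = 100, tol: float = 1e-6):
    """Plain Markov Clustering on a symmetric adjacency matrix.
    Returns clusters as sorted lists of node indices (overlaps are ignored)."""
    n = adj.shape[0]
    M = adj.astype(float) + np.eye(n)           # self-loops keep the random walk aperiodic
    M /= M.sum(axis=0, keepdims=True)           # column-stochastic transition matrix
    for _ in range(max_iter):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)    # expansion: spread flow
        M = M ** inflation                          # inflation: strengthen strong flows
        M /= M.sum(axis=0, keepdims=True)
        if np.abs(M - prev).max() < tol:
            break
    # non-empty (attractor) rows define the clusters after convergence
    clusters, seen = [], set()
    for i in range(n):
        members = set(np.nonzero(M[i] > tol)[0].tolist())
        if members and not members & seen:
            clusters.append(sorted(members))
            seen |= members
    return clusters

# toy interaction graph: two 3-node cliques joined by one weak edge
A = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1
print(mcl(A))   # typically recovers [[0, 1, 2], [3, 4, 5]]
```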
[
{
"docid": "907888b819c7f65fe34fb8eea6df9c93",
"text": "Most time-series datasets with multiple data streams have (many) missing measurements that need to be estimated. Most existing methods address this estimation problem either by interpolating within data streams or imputing across data streams; we develop a novel approach that does both. Our approach is based on a deep learning architecture that we call a Multidirectional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. To demonstrate the power of our approach we apply it to a familiar real-world medical dataset and demonstrate significantly improved performance.",
"title": ""
},
{
"docid": "1e18be7d7e121aa899c96cbcf5ea906b",
"text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1",
"title": ""
},
{
"docid": "26e60be4012b20575f3ddee16f046daa",
"text": "Natural scene character recognition is challenging due to the cluttered background, which is hard to separate from text. In this paper, we propose a novel method for robust scene character recognition. Specifically, we first use robust principal component analysis (PCA) to denoise character image by recovering the missing low-rank component and filtering out the sparse noise term, and then use a simple Histogram of oriented Gradient (HOG) to perform image feature extraction, and finally, use a sparse representation based classifier for recognition. In experiments on four public datasets, namely the Char74K dataset, ICADAR 2003 robust reading dataset, Street View Text (SVT) dataset and IIIT5K-word dataset, our method was demonstrated to be competitive with the state-of-the-art methods.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "645d9a7186080d4ec3c7ce708b1c9818",
"text": "With millions of users and billions of photos, web-scale face recognition is a challenging task that demands speed, accuracy, and scalability. Most current approaches do not address and do not scale well to Internet-sized scenarios such as tagging friends or finding celebrities. Focusing on web-scale face identification, we gather an 800,000 face dataset from the Facebook social network that models real-world situations where specific faces must be recognized and unknown identities rejected. We propose a novel Linearly Approximated Sparse Representation-based Classification (LASRC) algorithm that uses linear regression to perform sample selection for ‘-minimization, thus harnessing the speed of least-squares and the robustness of sparse solutions such as SRC. Our efficient LASRC algorithm achieves comparable performance to SRC with a 100–250 times speedup and exhibits similar recall to SVMs with much faster training. Extensive tests demonstrate our proposed approach is competitive on pair-matching verification tasks and outperforms current state-of-the-art algorithms on open-universe identification in uncontrolled, web-scale scenarios. 2013 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "362cf1594043c92f118876f959e078a4",
"text": "Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%).",
"title": ""
},
{
"docid": "bd41083b19e2d542b3835c3a008b30e6",
"text": "Formalizations are used in systems development to support the description of artifacts and to shape and regulate developer behavior. The limits to applying formalizations in these two ways are discussed based on examples from systems development practice. It is argued that formalizations, for example in the form of methods, are valuable in some situations, but inappropriate in others. The alternative to uncritically using formalizations is that systems developers reflect on the situations in which they find themselves and manage based on a combination of formal and informal approaches.",
"title": ""
},
{
"docid": "ce41d07b369635c5b0a914d336971f8e",
"text": "In this paper, a fuzzy controller for an inverted pendulum system is presented in two stages. These stages are: investigation of fuzzy control system modeling methods and solution of the “Inverted Pendulum Problem” by using Java programming with Applets for internet based control education. In the first stage, fuzzy modeling and fuzzy control system investigation, Java programming language, classes and multithreading were introduced. In the second stage specifically, simulation of the inverted pendulum problem was developed with Java Applets and the simulation results were given. Also some stability concepts are introduced. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7aa1df89f94fe1f653f1680fbf33e838",
"text": "Several modes of vaccine delivery have been developed in the last 25 years, which induce strong immune responses in pre-clinical models and in human clinical trials. Some modes of delivery include, adjuvants (aluminum hydroxide, Ribi formulation, QS21), liposomes, nanoparticles, virus like particles, immunostimulatory complexes (ISCOMs), dendrimers, viral vectors, DNA delivery via gene gun, electroporation or Biojector 2000, cell penetrating peptides, dendritic cell receptor targeting, toll-like receptors, chemokine receptors and bacterial toxins. There is an enormous amount of information and vaccine delivery methods available for guiding vaccine and immunotherapeutics development against diseases.",
"title": ""
},
{
"docid": "228c59c9bf7b4b2741567bffb3fcf73f",
"text": "This paper presents a new PSO-based optimization DBSCAN space clustering algorithm with obstacle constraints. The algorithm introduces obstacle model and simplifies two-dimensional coordinates of the cluster object coding to one-dimensional, then uses the PSO algorithm to obtain the shortest path and minimum obstacle distance. At the last stage, this paper fulfills spatial clustering based on obstacle distance. Theoretical analysis and experimental results show that the algorithm can get high-quality clustering result of space constraints with more reasonable and accurate quality.",
"title": ""
},
{
"docid": "0250d6bb0bcf11ca8af6c2661c1f7f57",
"text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.",
"title": ""
},
{
"docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1",
"text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.",
"title": ""
},
{
"docid": "8588a3317d4b594d8e19cb005c3d35c7",
"text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.",
"title": ""
},
{
"docid": "932088f443c5f0f3e239ed13032e56d7",
"text": "Hydro Muscles are linear actuators resembling ordinary biological muscles in terms of active dynamic output, passive material properties and appearance. The passive and dynamic characteristics of the latex based Hydro Muscle are addressed. The control tests of modular muscles are presented together with a muscle model relating sensed quantities with net force. Hydro Muscles are discussed in the context of conventional actuators. The hypothesis that Hydro Muscles have greater efficiency than McKibben Muscles is experimentally verified. Hydro Muscle peak efficiency with (without) back flow consideration was 88% (27%). Possible uses of Hydro Muscles are illustrated by relevant robotics projects at WPI. It is proposed that Hydro Muscles can also be an excellent educational tool for moderate-budget robotics classrooms and labs; the muscles are inexpensive (in the order of standard latex tubes of comparable size), made of off-the-shelf elements in less than 10 minutes, easily customizable, lightweight, biologically inspired, efficient, compliant soft linear actuators that are adept for power-augmentation. Moreover, a single source can actuate many muscles by utilizing control of flow and/or pressure. Still further, these muscles can utilize ordinary tap water and successfully operate within a safe range of pressures not overly exceeding standard water household pressure of about 0.59 MPa (85 psi).",
"title": ""
},
{
"docid": "d9710b9a214d95c572bdc34e1fe439c4",
"text": "This paper presents a new method, capable of automatically generating attacks on binary programs from software crashes. We analyze software crashes with a symbolic failure model by performing concolic executions following the failure directed paths, using a whole system environment model and concrete address mapped symbolic memory in S2 E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs, and several large scale applications, such as a word processor (Microsoft office word), a media player (mpalyer), an archiver (unrar), or a pdf reader (foxit). We can deal with vulnerability types including stack and heap overflows, format string, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and can be scaled to larger programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft office word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerability, we can automatically exploit the unlink() macro of glibc, which formerly requires sophisticated hacking efforts.",
"title": ""
},
{
"docid": "665da3a85a548d12864de5fad517e3ee",
"text": "To characterize the neural correlates of being personally involved in social interaction as opposed to being a passive observer of social interaction between others we performed an fMRI study in which participants were gazed at by virtual characters (ME) or observed them looking at someone else (OTHER). In dynamic animations virtual characters then showed socially relevant facial expressions as they would appear in greeting and approach situations (SOC) or arbitrary facial movements (ARB). Differential neural activity associated with ME>OTHER was located in anterior medial prefrontal cortex in contrast to the precuneus for OTHER>ME. Perception of socially relevant facial expressions (SOC>ARB) led to differentially increased neural activity in ventral medial prefrontal cortex. Perception of arbitrary facial movements (ARB>SOC) differentially activated the middle temporal gyrus. The results, thus, show that activation of medial prefrontal cortex underlies both the perception of social communication indicated by facial expressions and the feeling of personal involvement indicated by eye gaze. Our data also demonstrate that distinct regions of medial prefrontal cortex contribute differentially to social cognition: whereas the ventral medial prefrontal cortex is recruited during the analysis of social content as accessible in interactionally relevant mimic gestures, differential activation of a more dorsal part of medial prefrontal cortex subserves the detection of self-relevance and may thus establish an intersubjective context in which communicative signals are evaluated.",
"title": ""
},
{
"docid": "f0f2cdccd8f415cbd3fffcea4509562a",
"text": "Textual inference is an important component in many applications for understanding natural language. Classical approaches to textual inference rely on logical representations for meaning, which may be regarded as “external” to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data-structure and a corresponding inference algorithm that allows efficient implementation of this formalism. We proved the correctness of the new algorithm and established its efficiency analytically and empirically. The utility of our approach was illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.",
"title": ""
},
{
"docid": "a5214112059506a67f031d98a4e6f04f",
"text": "Accurate segmentation of cervical cells in Pap smear images is an important task for automatic identification of pre-cancerous changes in the uterine cervix. One of the major segmentation challenges is the overlapping of cytoplasm, which was less addressed by previous studies. In this paper, we propose a learning-based method to tackle the overlapping issue with robust shape priors by segmenting individual cell in Pap smear images. Specifically, we first define the problem as a discrete labeling task for multiple cells with a suitable cost function. We then use the coarse labeling result to initialize our dynamic multiple-template deformation model for further boundary refinement on each cell. Multiple-scale deep convolutional networks are adopted to learn the diverse cell appearance features. Also, we incorporate high level shape information to guide segmentation where the cells boundary is noisy or lost due to touching and overlapping cells. We evaluate the proposed algorithm on two different datasets, and our comparative experiments demonstrate the promising performance of the proposed method in terms of segmentation accuracy.",
"title": ""
},
{
"docid": "83f44152fe9103a8027b602de7360270",
"text": "The BATS project focuses on helping students with visual impairments access and explore spatial information using standard computer hardware and open source software. Our work is largely based on prior techniques used in presenting maps to the blind such as text-to-speech synthesis, auditory icons, and tactile feedback. We add spatial sound to position auditory icons and speech callouts in three dimensions, and use consumer-grade haptic feedback devices to provide additional map information through tactile vibrations and textures. Two prototypes have been developed for use in educational settings and have undergone minimal assessment. A system for public release and plans for more rigorous evaluation are in development.",
"title": ""
},
{
"docid": "4bc7687ba89699a537329f37dda4e74d",
"text": "At the same time as cities are growing, their share of older residents is increasing. To engage and assist cities to become more “age-friendly,” the World Health Organization (WHO) prepared the Global Age-Friendly Cities Guide and a companion “Checklist of Essential Features of Age-Friendly Cities”. In collaboration with partners in 35 cities from developed and developing countries, WHO determined the features of age-friendly cities in eight domains of urban life: outdoor spaces and buildings; transportation; housing; social participation; respect and social inclusion; civic participation and employment; communication and information; and community support and health services. In 33 cities, partners conducted 158 focus groups with persons aged 60 years and older from lower- and middle-income areas of a locally defined geographic area (n = 1,485). Additional focus groups were held in most sites with caregivers of older persons (n = 250 caregivers) and with service providers from the public, voluntary, and commercial sectors (n = 515). No systematic differences in focus group themes were noted between cities in developed and developing countries, although the positive, age-friendly features were more numerous in cities in developed countries. Physical accessibility, service proximity, security, affordability, and inclusiveness were important characteristics everywhere. Based on the recurring issues, a set of core features of an age-friendly city was identified. The Global Age-Friendly Cities Guide and companion “Checklist of Essential Features of Age-Friendly Cities” released by WHO serve as reference for other communities to assess their age readiness and plan change.",
"title": ""
}
] |
scidocsrr
|
89614a6ddc0d9dedd24685c5b6a1164b
|
Short-term load forecasting in smart grid: A combined CNN and K-means clustering approach
|
[
{
"docid": "f9b56de3658ef90b611c78bdb787d85b",
"text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.",
"title": ""
},
{
"docid": "0254d49cb759e163a032b6557f969bd3",
"text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting. This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast amount of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.",
"title": ""
},
{
"docid": "26032527ca18ef5a8cdeff7988c6389c",
"text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.",
"title": ""
}
] |
[
{
"docid": "1232e633a941b7aa8cccb28287b56e5b",
"text": "This paper presents a complete system for constructing panoramic image mosaics from sequences of images. Our mosaic representation associates a transformation matrix with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In particular, to construct a full view panorama, we introduce a rotational mosaic representation that associates a rotation matrix (and optionally a focal length) with each input image. A patch-based alignment algorithm is developed to quickly align two images given motion models. Techniques for estimating and refining camera focal lengths are also presented. In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimally registered image mosaic. To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we use a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment, we significantly improve the quality of our image mosaics, thereby enabling the creation of full view panoramic mosaics with hand-held cameras. We also present an inverse texture mapping algorithm for efficiently extracting environment maps from our panoramic image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players.",
"title": ""
},
{
"docid": "8ee0764d45e512bfc6b0273f7e90d2c1",
"text": "This work introduces a new dataset and framework for the exploration of topological data analysis (TDA) techniques applied to time-series data. We examine the end-toend TDA processing pipeline for persistent homology applied to time-delay embeddings of time series – embeddings that capture the underlying system dynamics from which time series data is acquired. In particular, we consider stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We explore these properties across a wide range of time-series datasets spanning multiple domains for single source multi-segment signals as well as multi-source single segment signals. Our analysis and dataset captures the entire TDA processing pipeline and includes time-delay embeddings, persistence diagrams, topological distance measures, as well as kernels for similarity learning and classification tasks for a broad set of time-series data sources. We outline the TDA framework and rationale behind the dataset and provide insights into the role of TDA for time-series analysis as well as opportunities for new work.",
"title": ""
},
{
"docid": "75f8f0d89bdb5067910a92553275b0d7",
"text": "It is well known that recognition performance degrades signi cantly when moving from a speakerdependent to a speaker-independent system. Traditional hidden Markov model (HMM) systems have successfully applied speaker-adaptation approaches to reduce this degradation. In this paper we present and evaluate some techniques for speaker-adaptation of a hybrid HMM-arti cial neural network (ANN) continuous speech recognition system. These techniques are applied to a well trained, speaker-independent, hybrid HMM-ANN system and the recognizer parameters are adapted to a new speaker through o -line procedures. The techniques are evaluated on the DARPA RM corpus using varying amounts of adaptation material and different ANN architectures. The results show that speaker-adaptation within the hybrid framework can substantially improve system performance.",
"title": ""
},
{
"docid": "cfeb97a848766269c2088d8191206cc8",
"text": "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"title": ""
},
{
"docid": "c67ffe3dfa6f0fe0449f13f1feb20300",
"text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "a37aae87354ff25bf7937adc7a9f8e62",
"text": "Vectorizing hand-drawn sketches is an important but challenging task. Many businesses rely on fashion, mechanical or structural designs which, sooner or later, need to be converted in vectorial form. For most, this is still a task done manually. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types in a precise, reliable and highly-simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson’s cross correlation and a new unbiased thinning algorithm that can get rid of scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging and edge linking procedures to post-process the obtained paths. Finally, a modification of the original Schneider’s vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the steps presented in this framework have been extensively tested and compared with state-of-the-art algorithms, showing (both qualitatively and quantitatively) their outperformance. Moreover they exhibit fast real-time performance, making them suitable for integration in any computer graphics toolset.",
"title": ""
},
{
"docid": "b17e909f1301880e93797ed75d26ce57",
"text": "We propose a simple, yet effective, Word Sense Disambiguation method that uses a combination of a lexical knowledge-base and embeddings. Similar to the classic Lesk algorithm, it exploits the idea that overlap between the context of a word and the definition of its senses provides information on its meaning. Instead of counting the number of words that overlap, we use embeddings to compute the similarity between the gloss of a sense and the context. Evaluation on both Dutch and English datasets shows that our method outperforms other Lesk methods and improves upon a state-of-theart knowledge-based system. Additional experiments confirm the effect of the use of glosses and indicate that our approach works well in different domains.",
"title": ""
},
{
"docid": "c1d75b9a71f373a6e44526adf3694f37",
"text": "Segmentation means segregating area of interest from the image. The aim of image segmentation is to cluster the pixels into salient image regions i.e. regions corresponding to individual surfaces, objects, or natural parts of objects. Automatic Brain tumour segmentation is a sensitive step in medical field. A significant medical informatics task is to perform the indexing of the patient databases according to image location, size and other characteristics of brain tumours based on magnetic resonance (MR) imagery. This requires segmenting tumours from different MR imaging modalities. Automated brain tumour segmentation from MR modalities is a challenging, computationally intensive task.Image segmentation plays an important role in image processing. MRI is generally more useful for brain tumour detection because it provides more detailed information about its type, position and size. For this reason, MRI imaging is the choice of study for the diagnostic purpose and, thereafter, for surgery and monitoring treatment outcomes. This paper presents a review of the various methods used in brain MRI image segmentation. The review covers imaging modalities, magnetic resonance imaging and methods for segmentation approaches. The paper concludes with a discussion on the upcoming trend of advanced researches in brain image segmentation. Keywords-Region growing, Level set method, Split and merge algorithm, MRI images",
"title": ""
},
{
"docid": "51c0d682dd0d9c24e23696ba09dc4f49",
"text": "Graph embedding methods represent nodes in a continuous vector space, preserving information from the graph (e.g. by sampling random walks). There are many hyper-parameters to these methods (such as random walk length) which have to be manually tuned for every graph. In this paper, we replace random walk hyperparameters with trainable parameters that we automatically learn via backpropagation. In particular, we learn a novel attention model on the power series of the transition matrix, which guides the random walk to optimize an upstream objective. Unlike previous approaches to attention models, the method that we propose utilizes attention parameters exclusively on the data (e.g. on the random walk), and not used by the model for inference. We experiment on link prediction tasks, as we aim to produce embeddings that best-preserve the graph structure, generalizing to unseen information. We improve state-of-the-art on a comprehensive suite of real world datasets including social, collaboration, and biological networks. Adding attention to random walks can reduce the error by 20% to 45% on datasets we attempted. Further, our learned attention parameters are different for every graph, and our automatically-found values agree with the optimal choice of hyper-parameter if we manually tune existing methods.",
"title": ""
},
{
"docid": "e45e49fb299659e2e71f5c4eb825aff6",
"text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.",
"title": ""
},
{
"docid": "19a02cb59a50f247663acc77b768d7ec",
"text": "Machine learning is a useful technology for decision support systems and assumes greater importance in research and practice. Whilst much of the work focuses technical implementations and the adaption of machine learning algorithms to application domains, the factors of machine learning design affecting the usefulness of decision support are still understudied. To enhance the understanding of machine learning and its use in decision support systems, we report the results of our content analysis of design-oriented research published between 1994 and 2013 in major Information Systems outlets. The findings suggest that the usefulness of machine learning for supporting decision-makers is dependent on the task, the phase of decision-making, and the applied technologies. We also report about the advantages and limitations of prior research, the applied evaluation methods and implications for future decision support research. Our findings suggest that future decision support research should shed more light on organizational and people-related evaluation criteria.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "9b53d96025c26254b38a4325c9d2da15",
"text": "The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in the gaussian mixture probability densities, ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramr-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and in particular, the dynamics of learning from examples. Prevailing theories so far have not paid much attention to the problem caused by singularity, relying only on ordinary statistical theories developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions. We explain that the maximum likelihood estimator is no longer subject to the gaussian distribution even asymptotically, because the Fisher information matrix degenerates, that the model selection criteria such as AIC, BIC, and MDL fail to hold in these models, that a smooth Bayesian prior becomes singular in such models, and that the trajectories of dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.",
"title": ""
},
{
"docid": "06c0ee8d139afd11aab1cc0883a57a68",
"text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.",
"title": ""
},
{
"docid": "af89b3636290235e0b241c6cced2a336",
"text": "Assume we were to come up with a family of distributions parameterized by θ in order to approximate the posterior, qθ(ω). Our goal is to set θ such that qθ(ω) is as similar to the true posterior p(ω|D) as possible. For clarity, qθ(ω) is a distribution over stochastic parameters ω that is determined by a set of learnable parameters θ and some source of randomness. The approximation is therefore limited by our choice of parametric function qθ(ω) as well as the randomness.1 Given ω and an input x, an output distribution p(y|x,ω) = p(y|fω(x)) = fω(x,y) is induced by observation noise (the conditionality of which is omitted for brevity).",
"title": ""
},
{
"docid": "5ffb3e630e5f020365e471e94d678cbb",
"text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.",
"title": ""
},
{
"docid": "56fb6fe1f6999b5d7a9dab19e8b877ef",
"text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.",
"title": ""
},
{
"docid": "e6e91ce66120af510e24a10dee6d64b7",
"text": "AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals’ incarceration, and the hiring of new employees, and it’s not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system is operating fairly (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in legal circles, but there exists still not much understanding of the formal conditions that a system must adhere to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these results to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. We conclude studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair decision-making systems.",
"title": ""
}
] |
scidocsrr
|
36a3ed9566c1bfdb9f07039480a892dc
|
Creation of a 3D printed temporal bone model from clinical CT data.
|
[
{
"docid": "45974f33d79bf4d3af349877ef119508",
"text": "Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Graspable 3D objects overcome the limitations of 3D visualizations which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a spatial model can be extracted from image data sets and exported to machine-readable data. That spatial model data is utilized by special printers for generating the final rapid prototype model. Patient–clinician interaction, surgical training, medical research and education may require graspable 3D objects. The limitations of rapid prototyping include cost and complexity, as well as the need for specialized equipment and consumables such as photoresist resins. Medical application of rapid prototyping is feasible for specialized surgical planning and prosthetics applications and has significant potential for development of new medical applications.",
"title": ""
}
] |
[
{
"docid": "e48903be16ccab7bf1263e0a407e5d66",
"text": "This research applies Lotka’s Law to metadata on open source software development. Lotka’s Law predicts the proportion of authors at different levels of productivity. Open source software development harnesses the creativity of thousands of programmers worldwide, is important to the progress of the Internet and many other computing environments, and yet has not been widely researched. We examine metadata from the Linux Software Map (LSM), which documents many open source projects, and Sourceforge, one of the largest resources for open source developers. Authoring patterns found are comparable to prior studies of Lotka’s Law for scientific and scholarly publishing. Lotka’s Law was found to be effective in understanding software development productivity patterns, and offer promise in predicting aggregate behavior of open source developers.",
"title": ""
},
{
"docid": "d6fe99533c66075ffb85faf7c70475f0",
"text": "Outlier detection has received significant attention in many applications, such as detecting credit card fraud or network intrusions. Most existing research focuses on numerical datasets, and cannot directly apply to categorical sets where there is little sense in calculating distances among data points. Furthermore, a number of outlier detection methods require quadratic time with respect to the dataset size and usually multiple dataset scans. These characteristics are undesirable for large datasets, potentially scattered over multiple distributed sites. In this paper, we introduce Attribute Value Frequency (A VF), a fast and scalable outlier detection strategy for categorical data. A VF scales linearly with the number of data points and attributes, and relies on a single data scan. AVF is compared with a list of representative outlier detection approaches that have not been contrasted against each other. Our proposed solution is experimentally shown to be significantly faster, and as effective in discovering outliers.",
"title": ""
},
{
"docid": "3d8a102c53c6e594e01afc7ad685c7ab",
"text": "As register allocation is one of the most important phases in optimizing compilers, much work has been done to improve its quality and speed. We present a novel register allocation architecture for programs in SSA-form which simplifies register allocation significantly. We investigate certain properties of SSA-programs and their interference graphs, showing that they belong to the class of chordal graphs. This leads to a quadratic-time optimal coloring algorithm and allows for decoupling the tasks of coloring, spilling and coalescing completely. After presenting heuristic methods for spilling and coalescing, we compare our coalescing heuristic to an optimal method based on integer linear programming.",
"title": ""
},
{
"docid": "774bdacd260740d5345a08f21e0fd8f0",
"text": "This paper presents a new way of categorizing behavior change in a framework called the Behavior Grid. This preliminary work shows 35 types of behavior along two categorical dimensions. To demonstrate the analytical potential for the Behavior Grid, this paper maps behavior goals from Facebook onto the framework, revealing potential patterns of intent. To show the potential for designers of persuasive technology, this paper uses the Behavior Grid to show what types of behavior change might most easily be achieved through mobile technology. The Behavior Grid needs further development, but this early version can still be useful for designers and researchers in thinking more clearly about behavior change and persuasive technology.",
"title": ""
},
{
"docid": "cf768855de6b9c33a1b8284b4e24383f",
"text": "The Value Sensitive Design (VSD) methodology provides a comprehensive framework for advancing a value-centered research and design agenda. Although VSD provides helpful ways of thinking about and designing value-centered computational systems, we argue that the specific mechanics of VSD create thorny tensions with respect to value sensitivity. In particular, we examine limitations due to value classifications, inadequate guidance on empirical tools for design, and the ways in which the design process is ordered. In this paper, we propose ways of maturing the VSD methodology to overcome these limitations and present three empirical case studies that illustrate a family of methods to effectively engage local expressions of values. The findings from our case studies provide evidence of how we can mature the VSD methodology to mitigate the pitfalls of classification and engender a commitment to reflect on and respond to local contexts of design.",
"title": ""
},
{
"docid": "3976419e9f78dbff8ae235dd7aee2d8d",
"text": "A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision. Recently, deep reinforcement learning algorithms combined with self-play have achieved superhuman proficiency in Go, Chess, and Shogi without human data or domain knowledge. In these environments, a reward is always received at the end of the game; however, for many combinatorial optimization environments, rewards are sparse and episodes are not guaranteed to terminate. We introduce Autodidactic Iteration: a novel reinforcement learning algorithm that is able to teach itself how to solve the Rubik’s Cube with no human assistance. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.",
"title": ""
},
{
"docid": "16d3a7217182ad331d85eb619fa459ee",
"text": "Pupil diameter was monitored during picture viewing to assess effects of hedonic valence and emotional arousal on pupillary responses. Autonomic activity (heart rate and skin conductance) was concurrently measured to determine whether pupillary changes are mediated by parasympathetic or sympathetic activation. Following an initial light reflex, pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance change, supporting the interpretation that sympathetic nervous system activity modulates these changes in the context of affective picture viewing. Taken together, the data provide strong support for the hypothesis that the pupil's response during affective picture viewing reflects emotional arousal associated with increased sympathetic activity.",
"title": ""
},
{
"docid": "b05d36b98d68c9407e6cb213bcf03709",
"text": "With the continuous increase in data velocity and volume nowadays, preserving system and data security is particularly affected. In order to handle the huge amount of data and to discover security incidents in real-time, analyses of log data streams are required. However, most of the log anomaly detection techniques fall short in considering continuous data processing. Thus, this paper aligns an anomaly detection technique for data stream processing. It thereby provides a conceptual basis for future adaption of other techniques and further delivers proof of concept by prototype implementation.",
"title": ""
},
{
"docid": "b07e438c8bd71765373341c3bf1f9088",
"text": "Procrastination is a common behavior, mainly in school settings. Only a few studies have analyzed the associations of academic procrastination with students' personal and family variables. In the present work, we analyzed the impact of socio-personal variables (e.g., parents' education, number of siblings, school grade level, and underachievement) on students' academic procrastination profiles. Two independent samples of 580 and 809 seventh to ninth graders, students attending the last three years of Portuguese Compulsory Education, have been taken. The findings, similar in both studies, reveal that procrastination decreases when the parents' education is higher, but it increases along with the number of siblings, the grade level, and the underachievement. The results are discussed in view of the findings of previous research. The implications for educational practice are also analyzed.",
"title": ""
},
{
"docid": "c53e0a1762e4b69a2b9e5520e3e0bbfe",
"text": "Conventional public key infrastructure (PKI) designs are not optimal and contain security flaws; there is much work underway in improving PKI. The properties given by the Bitcoin blockchain and its derivatives are a natural solution to some of the problems with PKI in particular, certificate transparency and elimination of single points of failure. Recently-proposed blockchain PKI designs are built as public ledgers linking identity with public key, giving no provision of privacy. We consider the suitability of a blockchain-based PKI for contexts in which PKI is required, but in which linking of identity with public key is undesirable; specifically, we show that blockchain can be used to construct a privacy-aware PKI while simultaneously eliminating some of the problems encountered in conventional PKI.",
"title": ""
},
{
"docid": "ef92244350e267d3b5b9251d496e0ee2",
"text": "A review of recent advances in power wafer level electronic packaging is presented based on the development of power device integration. The paper covers in more detail how advances in both semiconductor content and power advanced wafer level package design and materials have co-enabled significant advances in power device capability during recent years. Extrapolating the same trends in representative areas for the remainder of the decade serves to highlight where further improvement in materials and techniques can drive continued enhancements in usability, efficiency, reliability and overall cost of power semiconductor solutions. Along with next generation wafer level power packaging development, the role of modeling is a key to assure successful package design. An overview of the power package modeling is presented. Challenges of wafer level power semiconductor packaging and modeling in both next generation design and assembly processes are presented and discussed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df4923225affcd0ad02db3719409d5f2",
"text": "Emotions have a high impact in productivity, task quality, creativity, group rapport and job satisfaction. In this work we use lexical sentiment analysis to study emotions expressed in commit comments of different open source projects and analyze their relationship with different factors such as used programming language, time and day of the week in which the commit was made, team distribution and project approval. Our results show that projects developed in Java tend to have more negative commit comments, and that projects that have more distributed teams tend to have a higher positive polarity in their emotional content. Additionally, we found that commit comments written on Mondays tend to a more negative emotion. While our results need to be confirmed by a more representative sample they are an initial step into the study of emotions and related factors in open source projects.",
"title": ""
},
{
"docid": "e829a46ab8dd560f137b4c11c3626410",
"text": "Modeling dressed characters is known as a very tedious process. It u sually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then performing a physically-bas ed simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the ad equ te folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start w ith a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a v irtu l mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a p recomputed distance field around the mannequin. The system then splits the created surface into different pan els delimited by the seam-lines. The generated panels are typically not developable. However, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Therefore our sys tem automatically approximates each panel with a developable surface, while keeping them assembled along the s eams. This process allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D gar ment, including the folds due to the collisions with the body and gravity. The folds are generated using procedu ral modeling of the buckling phenomena observed in real fabric. The result of our algorithm consists of a realistic looking 3D mannequin dressed in the designed garment and the 2D patterns which can be used for distortion free texture mapping. The patterns we create also allow us to sew real replicas of the virtual garments.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "79c9f10c5e6fb163b09e9b773af14a3e",
"text": "Small RTTs (~tens of microseconds), bursty flow arrivals, and a large number of concurrent flows (thousands) in datacenters bring fundamental challenges to congestion control as they either force a flow to send at most one packet per RTT or induce a large queue build-up. The widespread use of shallow buffered switches also makes the problem more challenging with hosts generating many flows in bursts. In addition, as link speeds increase, algorithms that gradually probe for bandwidth take a long time to reach the fair-share. An ideal datacenter congestion control must provide 1) zero data loss, 2) fast convergence, 3) low buffer occupancy, and 4) high utilization. However, these requirements present conflicting goals.\n This paper presents a new radical approach, called ExpressPass, an end-to-end credit-scheduled, delay-bounded congestion control for datacenters. ExpressPass uses credit packets to control congestion even before sending data packets, which enables us to achieve bounded delay and fast convergence. It gracefully handles bursty flow arrivals. We implement ExpressPass using commodity switches and provide evaluations using testbed experiments and simulations. ExpressPass converges up to 80 times faster than DCTCP in 10 Gbps links, and the gap increases as link speeds become faster. It greatly improves performance under heavy incast workloads and significantly reduces the flow completion times, especially, for small and medium size flows compared to RCP, DCTCP, HULL, and DX under realistic workloads.",
"title": ""
},
{
"docid": "ad5c10745cd12c0fa47e52eac05907e0",
"text": "Many currently deployed Reinforcement Learning agents work in an environment shared with humans, be them co-workers, users or clients. It is desirable that these agents adjust to people’s preferences, learn faster thanks to their help, and act safely around them. We argue that most current approaches that learn from human feedback are unsafe: rewarding or punishing the agent a-posteriori cannot immediately prevent it from wrong-doing. In this paper, we extend Policy Gradient to make it robust to external directives, that would otherwise break the fundamentally on-policy nature of Policy Gradient. Our technique, Directed Policy Gradient (DPG), allows a teacher or backup policy to override the agent before it acts undesirably, while allowing the agent to leverage human advice or directives to learn faster. Our experiments demonstrate that DPG makes the agent learn much faster than reward-based approaches, while requiring an order of magnitude less advice. .",
"title": ""
},
{
"docid": "23b18b2795b0e5ff619fd9e88821cfad",
"text": "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose “Answerer in Questioner’s Mind” (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer’s intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: “MNIST Counting Dialog” and “GuessWhat?!.” In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.",
"title": ""
},
{
"docid": "1926166029995392a9ccb3c64bc10ee7",
"text": "OBJECTIVES\nFew low income countries have emergency medical services to provide prehospital medical care and transport to road traffic crash casualties. In Ghana most roadway casualties receive care and transport to the hospital from taxi, bus, or truck drivers. This study reports the methods used to devise a model for prehospital trauma training for commercial drivers in Ghana.\n\n\nMETHODS\nOver 300 commercial drivers attended a first aid and rescue course designed specifically for roadway trauma and geared to a low education level. The training programme has been evaluated twice at one and two year intervals by interviewing both trained and untrained drivers with regard to their experiences with injured persons. In conjunction with a review of prehospital care literature, lessons learnt from the evaluations were used in the revision of the training model.\n\n\nRESULTS\nControl of external haemorrhage was quickly learnt and used appropriately by the drivers. Areas identified needing emphasis in future trainings included consistent use of universal precautions and protection of airways in unconscious persons using the recovery position.\n\n\nCONCLUSION\nIn low income countries, prehospital trauma care for roadway casualties can be improved by training laypersons already involved in prehospital transport and care. Training should be locally devised, evidence based, educationally appropriate, and focus on practical demonstrations.",
"title": ""
}
] |
scidocsrr
|
8b348748c9ee826a7a3cee3402d3a67f
|
A Survey of Algorithms for Dense Subgraph Discovery
|
[
{
"docid": "2663e9e25bd27aefd8ca22b1acc6441f",
"text": "Over the years, frequent itemset discovery algorithms have been used to solve various interesting problems. As data mining techniques are being increasingly applied to non-traditional domains, existing approaches for finding frequent itemsets cannot be used as they cannot model the requirement of these domains. An alternate way of modeling the objects in these data sets, is to use a graph to model the database objects. Within that model, the problem of finding frequent patterns becomes that of discovering subgraphs that occur frequently over the entire set of graphs. In this paper we present a computationally efficient algorithm for finding all frequent subgraphs in large graph databases. We evaluated the performance of the algorithm by experiments with synthetic datasets as well as a chemical compound dataset. The empirical results show that our algorithm scales linearly with the number of input transactions and it is able to discover frequent subgraphs from a set of graph transactions reasonably fast, even though we have to deal with computationally hard problems such as canonical labeling of graphs and subgraph isomorphism which are not necessary for traditional frequent itemset discovery.",
"title": ""
}
] |
[
{
"docid": "28d8ef2f63b0b4f55c60ae06484365d1",
"text": "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data.\n We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks.\n In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.",
"title": ""
},
{
"docid": "b8d840944817351bb2969a745b55f5c6",
"text": ".............................................................................................................................................................. 7 Tiivistelmä .......................................................................................................................................................... 9 List of original papers .................................................................................................................................. 11 Acknowledgements ..................................................................................................................................... 13",
"title": ""
},
{
"docid": "66e7979aff5860f713dffd10e98eed3d",
"text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1",
"title": ""
},
{
"docid": "1152f211403964121ee688b0f7ec3443",
"text": "Wi-Fi (IEEE 802.11), is emerging as the primary medium for wireless Internet access. Cellular carriers are increasingly offloading their traffic to Wi-Fi Access Points to overcome capacity challenges, limited RF spectrum availability, cost of deployment, and keep up with the traffic demands driven by user generated content. The ubiquity of Wi-Fi and its emergence as a universal wireless interface makes it the perfect tracking device. The Wi-Fi offloading trend provides ample opportunities for adversaries to collect samples (e.g., Wi-Fi probes) and track the mobility patterns and location of users. In this work, we show that RF fingerprinting of Wi-Fi devices is feasible using commodity software defined radio platforms. We developed a framework for reproducible RF fingerprinting analysis of Wi-Fi cards. We developed a set of techniques for distinguishing Wi-Fi cards, most are unique to the IEEE802.11a/g/p standard, including scrambling seed pattern, carrier frequency offset, sampling frequency offset, transient ramp-up/down periods, and a symmetric Kullback-Liebler divergence-based separation technique. We evaluated the performance of our techniques over a set of 93 Wi-Fi devices spanning 13 models of cards. In order to assess the potential of the proposed techniques on similar devices, we used 3 sets of 26 Wi-Fi devices of identical model. Our results, indicate that it is easy to distinguish between models with a success rate of 95%. It is also possible to uniquely identify a device with 47% success rate if the samples are collected within a 10s interval of time.",
"title": ""
},
{
"docid": "01cf7cb5dd78d5f7754e1c31da9a9eb9",
"text": "Today ́s Electronic Industry is changing at a high pace. The root causes are manifold. So world population is growing up to eight billions and gives new challenges in terms of urbanization, mobility and connectivity. Consequently, there will raise up a lot of new business models for the electronic industry. Connectivity will take a large influence on our lives. Concepts like Industry 4.0, internet of things, M2M communication, smart homes or communication in or to cars are growing up. All these applications are based on the same demanding requirement – a high amount of data and increased data transfer rate. These arguments bring up large challenges to the Printed Circuit Board (PCB) design and manufacturing. This paper investigates the impact of different PCB manufacturing technologies and their relation to their high frequency behavior. In the course of the paper a brief overview of PCB manufacturing capabilities is be presented. Moreover, signal losses in terms of frequency, design, manufacturing processes, and substrate materials are investigated. The aim of this paper is, to develop a concept to use materials in combination with optimized PCB manufacturing processes, which allows a significant reduction of losses and increased signal quality. First analysis demonstrate, that for increased signal frequency, demanded by growing data transfer rate, the capabilities to manufacture high frequency PCBs become a key factor in terms of losses. Base materials with particularly high speed properties like very low dielectric constants are used for efficient design of high speed data link lines. Furthermore, copper foils with very low treatment are to be used to minimize loss caused by the skin effect. In addition to the materials composition, the design of high speed circuits is optimized with the help of comprehensive simulations studies. The work on this paper focuses on requirements and main questions arising during the PCB manufacturing process in order to improve the system in terms of losses. For that matter, there are several approaches that can be used. For example, the optimization of the structuring process, the use of efficient interconnection capabilities, and dedicated surface finishing can be used to reduce losses and preserve signal integrity. In this study, a comparison of different PCB manufacturing processes by using measurement results of demonstrators that imitate real PCB applications will be discussed. Special attention has be drawn to the manufacturing capabilities which are optimized for high frequency requirements and focused to avoid signal loss. Different line structures like microstrip lines, coplanar waveguides, and surface integrated waveguides are used for this assessment. This research was carried out by Austria Technologie & Systemtechnik AG (AT&S AG), in cooperation with Vienna University of Technology, Institute of Electrodynamics, Microwave and Circuit Engineering. Introduction Several commercially available PCB fabrication processes exist for manufacturing PCBs. In this paper two methods, pattern plating and panel plating, were utilized for manufacturing the test samples. The first step in both described manufacturing processes is drilling, which allows connections in between different copper layers. The second step for pattern plating (see figure 1) is the flash copper plating process, wherein only a thin copper skin (flash copper) is plated into the drilled holes and over the entire surface. 
On top of the plated copper a layer of photosensitive etch resist is laminated which is imaged subsequently by ultraviolet (UV) light with a negative film. Negative film imaging is exposing the gaps in between the traces to the UV light. In developing process the non-exposed dry film is removed with a sodium solution. After that, the whole surrounding space is plated with copper and is eventually covered by tin. The tin layer protects the actual circuit pattern during etching. The pattern plating process shows typically a smaller line width tolerance, compared to panel plating, because of a lower copper thickness before etching. The overall process tolerance for narrow dimensions in the order of several tenths of μm is approximately ± 10%. As originally published in the IPC APEX EXPO Conference Proceedings.",
"title": ""
},
{
"docid": "903d00a02846450ebd18a8ce865889b5",
"text": "The ability to solve probability word problems such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step endto-end fully automated approach for solving such questions that is able to automatically provide answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a highlevel model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-toend evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).",
"title": ""
},
{
"docid": "832916685b22b536d1e8e85f0eeb0e14",
"text": "People have always sought an attractive smile in harmony with an esthetic appearance. This trend is steadily growing as it influences one’s self esteem and psychological well-being.1,2 Faced with highly esthetic demanding patients, the practitioner should guarantee esthetic outcomes involving conservative procedures. This is undoubtedly challenging and often requiring a perfect multidisciplinary approach.3",
"title": ""
},
{
"docid": "a63bfd773444b0ac70700a840a844743",
"text": "The utility of thermal inkjet (TIJ) technology for preparing solid dosage forms of drugs was examined. Solutions of prednisolone in a solvent mixture of ethanol, water, and glycerol (80/17/3 by volume) were dispensed onto poly(tetrafluoroethylene)-coated fiberglass films using TIJ cartridges and a personal printer and using a micropipette for comparison. The post-dried, TIJ-dispensed samples were shown to contain a mixture of prednisolone Forms I and III based on PXRD analyses that were confirmed by Raman analyses. The starting commercial material was determined to be Form I. Samples prepared by dispensing the solution from a micropipette initially showed only Form I; subsequent Raman mapping of these samples revealed the presence of two polymorphs. Raman mapping of the TIJ-dispensed samples also showed both polymorphs. The results indicate that the solvent mixture used in the dispensing solution combined with the thermal treatment of the samples after dispensing were likely the primary reason for the generation of the two polymorphs. The advantages of using a multidisciplinary approach to characterize drug delivery systems are demonstrated using solid state mapping techniques. Both PXRD and Raman spectroscopy were needed to fully characterize the samples. Finally, this report clarifies prednisolone's polymorphic nomenclature existent in the scientific literature.",
"title": ""
},
{
"docid": "9e7fc71def2afc58025ff5e0198148d0",
"text": "BACKGROUD\nWith the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects.\n\n\nNEW METHOD\nWe have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability.\n\n\nRESULTS\nThis study demonstrates new methods for computing and visualizing 'grand' ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute.",
"title": ""
},
{
"docid": "c87112a95e41fccd9fc33bedf45e2bb5",
"text": "Smart grid introduces a wealth of promising applications for upcoming fifth-generation mobile networks (5G), enabling households and utility companies to establish a two-way digital communications dialogue, which can benefit both of them. The utility can monitor real-time consumption of end users and take proper measures (e.g., real-time pricing) to shape their consumption profile or to plan enough supply to meet the foreseen demand. On the other hand, a smart home can receive real-time electricity prices and adjust its consumption to minimize its daily electricity expenditure, while meeting the energy need and the satisfaction level of the dwellers. Smart Home applications for smart phones are also a promising use case, where users can remotely control their appliances, while they are away at work or on their ways home. Although these emerging services can evidently boost the efficiency of the market and the satisfaction of the consumers, they may also introduce new attack surfaces making the grid vulnerable to financial losses or even physical damages. In this paper, we propose an architecture to secure smart grid communications incorporating an intrusion detection system, composed of distributed components collaborating with each other to detect price integrity or load alteration attacks in different segments of an advanced metering infrastructure.",
"title": ""
},
{
"docid": "586ba74140fb7f68cc7c5b0990fb7352",
"text": "Hotel companies are struggling to keep up with the rapid consumer adoption of social media. Although many companies have begun to develop social media programs, the industry has yet to fully explore the potential of this emerging data and communication resource. The revenue management department, as it evolves from tactical inventory management to a more expansive role across the organization, is poised to be an early adopter of the opportunities afforded by social media. We propose a framework for evaluating social media-related revenue management opportunities, discuss the issues associated with leveraging these opportunities and propose a roadmap for future research in this area. Journal of Revenue and Pricing Management (2011) 10, 293–305. doi:10.1057/rpm.2011.12; published online 6 May 2011",
"title": ""
},
{
"docid": "7bc2c428a43437afbbdb880ea9431288",
"text": "Multi-label image classification aims to predict multiple labels for a single image which contains diverse content. By utilizing label correlations, various techniques have been developed to improve classification performance. However, current existing methods either neglect image features when exploiting label correlations or lack the ability to learn image-dependent conditional label structures. In this paper, we develop conditional graphical Lasso (CGL) to handle these challenges. CGL provides a unified Bayesian framework for structure and parameter learning conditioned on image features. We formulate the multi-label prediction as CGL inference problem, which is solved by a mean field variational approach. Meanwhile, CGL learning is efficient due to a tailored proximal gradient procedure by applying the maximum a posterior (MAP) methodology. CGL performs competitively for multi-label image classification on benchmark datasets MULAN scene, PASCAL VOC 2007 and PASCAL VOC 2012, compared with the state-of-the-art multi-label classification algorithms.",
"title": ""
},
{
"docid": "139cff6c4b5deebe1138e3bb6bec182b",
"text": "Persuasive technologies aim to influence user's behaviors. In order to be effective, many of the persuasive technologies developed so far relies on user's motivation and ability, which is highly variable and often the reason behind the failure of such technology. In this paper, we present the concept of Mindless Computing, which is a new approach to persuasive technology design. Mindless Computing leverages theories and concepts from psychology and behavioral economics into the design of technologies for behavior change. We show through a systematic review that most of the current persuasive technologies do not utilize the fast and automatic mental processes for behavioral change and there is an opportunity for persuasive technology designers to develop systems that are less reliant on user's motivation and ability. We describe two examples of mindless technologies and present pilot studies with encouraging results. Finally, we discuss design guidelines and considerations for developing this type of persuasive technology.",
"title": ""
},
{
"docid": "f7c46115abe7cc204dd7dbd56f9e13c6",
"text": "Forecasting of future electricity demand is very important for decision making in power system operation and planning. In recent years, due to privatization and deregulation of the power industry, accurate electricity forecasting has become an important research area for efficient electricity production. This paper presents a time series approach for mid-term load forecasting (MTLF) in order to predict the daily peak load for the next month. The proposed method employs a computational intelligence scheme based on the self-organizing map (SOM) and support vector machine (SVM). According to the similarity degree of the time series load data, SOM is used as a clustering tool to cluster the training data into two subsets, using the Kohonen rule. As a novel machine learning technique, the support vector regression (SVR) is used to fit the testing data based on the clustered subsets, for predicting the daily peak load. Our proposed SOM-SVR load forecasting model is evaluated in MATLAB on the electricity load dataset provided by the Eastern Slovakian Electricity Corporation, which was used in the 2001 European Network on Intelligent Technologies (EUNITE) load forecasting competition. Power load data obtained from (i) Tenaga Nasional Berhad (TNB) for peninsular Malaysia and (ii) PJM for the eastern interconnection grid of the United States of America is used to benchmark the performance of our proposed model. Experimental results obtained indicate that our proposed SOM-SVR technique gives significantly good prediction accuracy for MTLF compared to previously researched findings using the EUNITE, Malaysian and PJM electricity load",
"title": ""
},
{
"docid": "0c5c83cfb63b335b327f044973514d23",
"text": "With the explosion of healthcare information, there has been a tremendous amount of heterogeneous textual medical knowledge (TMK), which plays an essential role in healthcare information systems. Existing works for integrating and utilizing the TMK mainly focus on straightforward connections establishment and pay less attention to make computers interpret and retrieve knowledge correctly and quickly. In this paper, we explore a novel model to organize and integrate the TMK into conceptual graphs. We then employ a framework to automatically retrieve knowledge in knowledge graphs with a high precision. In order to perform reasonable inference on knowledge graphs, we propose a contextual inference pruning algorithm to achieve efficient chain inference. Our algorithm achieves a better inference result with precision and recall of 92% and 96%, respectively, which can avoid most of the meaningless inferences. In addition, we implement two prototypes and provide services, and the results show our approach is practical and effective.",
"title": ""
},
{
"docid": "8b3dffbb60d75f042c29a22340383453",
"text": "Welcome to the course: Gazing at Games: Using Eye Tracking to Control Virtual Characters. I will start with a short introduction of the course which will give you an idea of its aims and structure. I will also talk a bit about my background and research interests and motivate why I think this work is important.",
"title": ""
},
{
"docid": "12d0d14ce1bc94a7346fd00c26631e9b",
"text": "In visual saliency estimation, one of the most challenging tasks is to distinguish targets and distractors that share certain visual attributes. With the observation that such targets and distractors can sometimes be easily separated when projected to specific subspaces, we propose to estimate image saliency by learning a set of discriminative subspaces that perform the best in popping out targets and suppressing distractors. Toward this end, we first conduct principal component analysis on massive randomly selected image patches. The principal components, which correspond to the largest eigenvalues, are selected to construct candidate subspaces since they often demonstrate impressive abilities to separate targets and distractors. By projecting images onto various subspaces, we further characterize each image patch by its contrasts against randomly selected neighboring and peripheral regions. In this manner, the probable targets often have the highest responses, while the responses at background regions become very low. Based on such random contrasts, an optimization framework with pairwise binary terms is adopted to learn the saliency model that best separates salient targets and distractors by optimally integrating the cues from various subspaces. Experimental results on two public benchmarks show that the proposed approach outperforms 16 state-of-the-art methods in human fixation prediction.",
"title": ""
},
{
"docid": "01d93a621bb6d52ca37650d4a79c43f3",
"text": "Recommender systems are a classical example for machine learning applications, however, they have not yet been used extensively in health informatics and medical scenarios. We argue that this is due to the specifics of benchmarking criteria in medical scenarios and the multitude of drastically differing end-user groups and the enormous contextcomplexity of the medical domain. Here both risk perceptions towards data security and privacy as well as trust in safe technical systems play a central and specific role, particularly in the clinical context. These aspects dominate acceptance of such systems. By using a Doctor-in-theLoop approach some of these difficulties could be mitigated by combining both human expertise with computer efficiency. We provide a three-part research framework to access health recommender systems, suggesting to incorporate domain understanding, evaluation and specific methodology into the development process.",
"title": ""
}
] |
scidocsrr
|
6ac2ce9b4ff1957ba459881dd4b625f8
|
Data Storage Security and Privacy in Cloud Computing : A Comprehensive Survey
|
[
{
"docid": "02564434d1dab0031718a10400a59593",
"text": "The advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in cloud, it is crucial for the search service to allow multi-keyword query and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of \" coordinate matching \" , i.e., as many matches as possible, to capture the similarity between search query and data documents, and further use \" inner product similarity \" to quantitatively formalize such principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes is given, and experiments on the real-world dataset further show proposed schemes indeed introduce low overhead on computation and communication. INTRODUCTION Due to the rapid expansion of data, the data owners tend to store their data into the cloud to release the burden of data storage and maintenance [1]. However, as the cloud customers and the cloud server are not in the same trusted domain, our outsourced data may be under the exposure to the risk. Thus, before sent to the cloud, the sensitive data needs to be encrypted to protect for data privacy and combat unsolicited accesses. Unfortunately, the traditional plaintext search methods cannot be directly applied to the encrypted cloud data any more. The traditional information retrieval (IR) has already provided multi-keyword ranked search for the data user. In the same way, the cloud server needs provide the data user with the similar function, while protecting data and search privacy. It …",
"title": ""
}
] |
[
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: [email protected]; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
},
{
"docid": "a90a20f66d3e73947fbc28dc60bcee24",
"text": "It is well known that the performance of speech recognition algorithms degrade in the presence of adverse environments where a speaker is under stress, emotion, or Lombard effect. This study evaluates the effectiveness of traditional features in recognition of speech under stress and formulates new features which are shown to improve stressed speech recognition. The focus is on formulating robust features which are less dependent on the speaking conditions rather than applying compensation or adaptation techniques. The stressed speaking styles considered are simulated angry and loud, Lombard effect speech, and noisy actual stressed speech from the SUSAS database which is available on CD-ROM through the NATO IST/TG-01 research group and LDC1 . In addition, this study investigates the immunity of linear prediction power spectrum and fast Fourier transform power spectrum to the presence of stress. Our results show that unlike fast Fourier transform’s (FFT) immunity to noise, the linear prediction power spectrum is more immune than FFT to stress as well as to a combination of a noisy and stressful environment. Finally, the effect of various parameter processing such as fixed versus variable preemphasis, liftering, and fixed versus cepstral mean normalization are studied. Two alternative frequency partitioning methods are proposed and compared with traditional mel-frequency cepstral coefficients (MFCC) features for stressed speech recognition. It is shown that the alternate filterbank frequency partitions are more effective for recognition of speech under both simulated and actual stressed conditions.",
"title": ""
},
{
"docid": "ce8262364b1a1b840e50f876c6d959fe",
"text": "Architectural styles, object-oriented design, and design patterns all hold promise as approaches that simplify software design and reuse by capturing and exploiting system design knowledge. This article explores the capabilities and roles of the various approaches, their strengths, and their limitations. oftware system builders increasingly recognize the importance of exploiting design knowledge in the engineering of new systems. Several distinct but related approaches hold promise. One approach is to focus on the architectural level of system design—the gross structure of a system as a composition of interacting parts. Architectural designs illuminate such key issues as scaling and portability, the assignment of functionality to design elements, interaction protocols between elements, and global system properties such as processing rates, end-to-end capacities, and overall performance.1 Architectural descriptions tend to be informal and idiosyncratic: box-and-line diagrams convey essential system structure, with accompanying prose explaining the meaning of the symbols. Nonetheless, they provide a critical staging point for determining whether a system can meet its essential requirements, and they guide implementers in constructing the system. More recently, architectural descriptions have been used for codifying and reusing design knowledge. Much of their power comes from use of idiomatic architectural terms, such as “clientserver system,” “layered system,” or “blackboard organization.”",
"title": ""
},
{
"docid": "d2521791d515b69d5a4a8c9ea02e3d17",
"text": "In this paper, four-wheel active steering (4WAS), which can control the front wheel steering angle and rear wheel steering angle independently, has been investigated based on the analysis of deficiency of conventional four wheel steering (4WS). A model following control structure is adopted to follow the desired yaw rate and vehicle sideslip angle, which consists of feedforward and feedback controller. The feedback controller is designed based on the optimal control theory, minimizing the tracking errors between the outputs of actual vehicle model and that of linear reference model. Finally, computer simulations are performed to evaluate the proposed control system via the co-simulation of Matlab/Simulink and CarSim. Simulation results show that the designed 4WAS controller can achieve the good response performance and improve the vehicle handling and stability.",
"title": ""
},
{
"docid": "09d4f38c87d6cc0e2cb6b1a7caad10f8",
"text": "Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDPs in that class almost never has any spurious local optima. This paper was corrected on April 9, 2018. Theorems 2 and 4 had the assumption that M (1) is a manifold. From this assumption it was stated that TYM = {Ẏ ∈ Rn×p : A(Ẏ Y >+ Y Ẏ >) = 0}, which is not true in general. To ensure this identity, the theorems now make the stronger assumption that gradients of the constraintsA(Y Y >) = b are linearly independent for all Y inM. All examples treated in the paper satisfy this assumption. Appendix D gives details.",
"title": ""
},
{
"docid": "065466185ba541472ae84e0b5cf5e864",
"text": "A significant challenge for crowdsourcing has been increasing worker engagement and output quality. We explore the effects of social, learning, and financial strategies, and their combinations, on increasing worker retention across tasks and change in the quality of worker output. Through three experiments, we show that 1) using these strategies together increased workers' engagement and the quality of their work; 2) a social strategy was most effective for increasing engagement; 3) a learning strategy was most effective in improving quality. The findings of this paper provide strategies for harnessing the crowd to perform complex tasks, as well as insight into crowd workers' motivation.",
"title": ""
},
{
"docid": "d984489b4b71eabe39ed79fac9cf27a1",
"text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing",
"title": ""
},
{
"docid": "0d28ddef1fa86942da679aec23dff890",
"text": "Electronic patient records remain a rather unexplored, but potentially rich data source for discovering correlations between diseases. We describe a general approach for gathering phenotypic descriptions of patients from medical records in a systematic and non-cohort dependent manner. By extracting phenotype information from the free-text in such records we demonstrate that we can extend the information contained in the structured record data, and use it for producing fine-grained patient stratification and disease co-occurrence statistics. The approach uses a dictionary based on the International Classification of Disease ontology and is therefore in principle language independent. As a use case we show how records from a Danish psychiatric hospital lead to the identification of disease correlations, which subsequently can be mapped to systems biology frameworks.",
"title": ""
},
{
"docid": "106fefb169c7e95999fb411b4e07954e",
"text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.",
"title": ""
},
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov",
"title": ""
},
{
"docid": "221d346a3ef1821438d388335c2d3a13",
"text": "Integrating data mining into business processes becomes crucial for business today. Modern business process management frameworks provide great support for flexible design, deployment and management of business processes. However, integrating complex data mining services into such frameworks is not trivial due to unclear definitions of user roles and missing flexible data mining services as well as missing standards and methods for the deployment of data mining solutions. This work contributes an integrated view on the definition of user roles for business, IT and data mining and discusses the integration of data mining in business processes and its evaluation in the context of BPR.",
"title": ""
},
{
"docid": "ba8886a9e251492ec0dca0512d6994be",
"text": "In this paper, we consider various moment inequalities for sums of random matrices—which are well–studied in the functional analysis and probability theory literature—and demonstrate how they can be used to obtain the best known performance guarantees for several problems in optimization. First, we show that the validity of a recent conjecture of Nemirovski is actually a direct consequence of the so–called non–commutative Khintchine’s inequality in functional analysis. Using this result, we show that an SDP–based algorithm of Nemirovski, which is developed for solving a class of quadratic optimization problems with orthogonality constraints, has a logarithmic approximation guarantee. This improves upon the polynomial approximation guarantee established earlier by Nemirovski. Furthermore, we obtain improved safe tractable approximations of a certain class of chance constrained linear matrix inequalities. Secondly, we consider a recent result of Delage and Ye on the so–called data–driven distributionally robust stochastic programming problem. One of the assumptions in the Delage–Ye result is that the underlying probability distribution has bounded support. However, using a suitable moment inequality, we show that the result in fact holds for a much larger class of probability distributions. Given the close connection between the behavior of sums of random matrices and the theoretical properties of various optimization problems, we expect that the moment inequalities discussed in this paper will find further applications in optimization.",
"title": ""
},
{
"docid": "cb18b8d464261ac4b46587e6a31efce0",
"text": "This paper critically analyses the foundations of three widely advocated information security management standards (BS7799, GASPP and SSE-CMM). The analysis reveals several fundamental problems related to these standards, casting serious doubts on their validity. The implications for research and practice, in improving information security management standards, are considered.",
"title": ""
},
{
"docid": "f9dc4cfb42a5ec893f5819e03c64d4bc",
"text": "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.,,To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.",
"title": ""
},
{
"docid": "6fbd64c7b38493c432bb140c544f3235",
"text": "It is well-known that people love food. However, an insane diet can cause problems in the general health of the people. Since health is strictly linked to the diet, advanced computer vision tools to recognize food images (e.g. acquired with mobile/wearable cameras), as well as their properties (e.g., calories), can help the diet monitoring by providing useful information to the experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). The food recognition is a challenging task since the food is intrinsically deformable and presents high variability in appearance. Image representation plays a fundamental role. To properly study the peculiarities of the image representation in the food application context, a benchmark dataset is needed. These facts motivate the work presented in this paper. In this work we introduce the UNICT-FD889 dataset. It is the first food image dataset composed by over 800 distinct plates of food which can be used as benchmark to design and compare representation models of food images. We exploit the UNICT-FD889 dataset for Near Duplicate Image Retrieval (NDIR) purposes by comparing three standard state-of-the-art image descriptors: Bag of Textons, PRICoLBP and SIFT. Results confirm that both textures and colors are fundamental properties in food representation. Moreover the experiments point out that the Bag of Textons representation obtained considering the color domain is more accurate than the other two approaches for NDIR.",
"title": ""
},
{
"docid": "6f68ed77668f21696051947a8ccc4f56",
"text": "Most discussions of computer security focus on control of disclosure. In Particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to Control disclosure and to provide integrity.",
"title": ""
},
{
"docid": "448dc3c1c5207e606f1bd3b386f8bbde",
"text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.",
"title": ""
},
{
"docid": "3fae9d0778c9f9df1ae51ad3b5f62a05",
"text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.",
"title": ""
},
{
"docid": "89460f94140b9471b120674ddd904948",
"text": "Cross-disciplinary research on collective intelligence considers that groups, like individuals, have a certain level of intelligence. For example, the study by Woolley et al. (2010) indicates that groups which perform well on one type of task will perform well on others. In a pair of empirical studies of groups interacting face-to-face, they found evidence of a collective intelligence factor, a measure of consistent group performance across a series of tasks, which was highly predictive of performance on a subsequent, more complex task. This collective intelligence factor differed from the individual intelligence of group members, and was significantly predicted by members’ social sensitivity – the ability to understand the emotions of others based on visual facial cues (Baron-Cohen et al. 2001).",
"title": ""
},
{
"docid": "1c83671ad725908b2d4a6467b23fc83f",
"text": "Although many IT and business managers today may be lured into business intelligence (BI) investments by the promise of predictive analytics and emerging BI trends, creating an enterprise-wide BI capability is a journey that takes time. This article describes Norfolk Southern Railway’s BI journey, which began in the early 1990s with departmental reporting, evolved into data warehousing and analytic applications, and has resulted in a company that today uses BI to support corporate strategy. We describe how BI at Norfolk Southern evolved over several decades, with the company developing strong BI foundations and an effective enterprise-wide BI capability. We also identify the practices that kept the BI journey “on track.” These practices can be used by other IT and business leaders as they plan and develop BI capabilities in their own organizations.",
"title": ""
}
] |
scidocsrr
|
5d32d1925bb1e65a24ee95a7d8eb8198
|
An algorithm to find relationships between web vulnerabilities
|
[
{
"docid": "72d51fd4b384f4a9c3f6fe70606ab120",
"text": "Cloud Computing is a flexible, cost-effective, and proven delivery platform for providing business or consumer IT services over the Internet. However, cloud Computing presents an added level of risk because essential services are often outsourced to a third party, which makes it harder to maintain data security and privacy, support data and service availability, and demonstrate compliance. Cloud Computing leverages many technologies (SOA, virtualization, Web 2.0); it also inherits their security issues, which we discuss here, identifying the main vulnerabilities in this kind of systems and the most important threats found in the literature related to Cloud Computing and its environment as well as to identify and relate vulnerabilities and threats with possible solutions.",
"title": ""
}
] |
[
{
"docid": "7de84d62d8fdc0dc466417ed36c6ec66",
"text": "Sensing current is a fundamental function in power supply circuits, especially as it generally applies to protection and feedback control. Emerging state-of-the-art switching supplies, in fact, are now exploring ways to use this sensed-current information to improve transient response, power efficiency, and compensation performance by appropriately self-adjusting, on the fly, frequency, inductor ripple current, switching configuration (e.g., synchronous to/from asynchronous), and other operating parameters. The discontinuous, non-integrated, and inaccurate nature of existing lossless current-sensing schemes, however, impedes their widespread adoption, and lossy solutions are not acceptable. Lossless, filter-based techniques are continuous, but inaccurate when integrated on-chip because of the inherent mismatches between the filter and the power inductor. The proposed GM-C filter-based, fully integrated current-sensing CMOS scheme circumvents this accuracy limitation by introducing a self-learning sequence to start-up and power-on-reset. During these seldom-occurring events, the gain and bandwidth of the internal filter are matched to the response of the power inductor and its equivalent series resistance (ESR), effectively measuring their values. A 0.5 mum CMOS realization of the proposed scheme was fabricated and applied to a current-mode buck switching supply, achieving overall DC and AC current-gain errors of 8% and 9%, respectively, at 0.8 A DC load and 0.2 A ripple currents for 3.5 muH-14 muH inductors with ESRs ranging from 48 mOmega to 384 mOmega (other lossless, state-of-the-art solutions achieve 20%-40% error, and only when the nominal specifications of the power MOSFET and/or inductor are known). Since the self-learning sequence is non-recurring, the power losses associated with the foregoing solution are minimal, translating to a 2.6% power efficiency savings when compared to the more traditional but accurate series-sense resistor (e.g., 50 mOmega) technique.",
"title": ""
},
{
"docid": "be90932dfddcf02b33fc2ef573b8c910",
"text": "Style-based Text Categorization: What Newspaper Am I Reading?",
"title": ""
},
{
"docid": "5b3709a34402fc135fdd135c77454f11",
"text": "A Ka-band 4×4 Butler matrix feeding a 4-element linear antenna array has been presented in this paper. The Butler matrix is based on a rectangular coaxial structure, constructed using five layers of gold coated micromachined silicon slices. The patch antennas are of an air-filled microstrip type, and spaced by half a wavelength at 38 GHz to form the array. The demonstrated device is 26 mm by 23 mm in size and 1.5 mm in height. The measured return losses at all input ports are better than −10 dB between 34.4 and 38.3 GHz. The measured radiation pattern of one beam has shown good agreement with the simulations.",
"title": ""
},
{
"docid": "3a502851ee6df1d210d709d8e8d4b831",
"text": "CREATION onsumers today have more choices of products and services than ever before, but they seem dissatisfied. Firms invest in greater product variety but are less able to differentiate themselves. Growth and value creation have become the dominant themes for managers. In this paper, we explain this paradox. The meaning of value and the process of value creation are rapidly shifting from a product-and firm-centric view to personalized consumer experiences. Informed, networked, empowered, and active consumers are increasingly co-creating value with the firm. The interaction between the firm and the consumer is becoming the locus of value creation and value extraction. As value shifts to experiences, the market is becoming a forum for conversation and interactions between consumers, consumer communities, and firms. It is this dialogue, access, transparency, and understanding of risk-benefits that is central to the next practice in value creation.",
"title": ""
},
{
"docid": "6e418e90a03a44380381d5c45c4e705b",
"text": "This paper presents a system for recognizing static hand gestures of alphabet in Bangla Sign Language (BSL). A BSL finger spelling and an alphabet gesture recognition system was designed with Artificial Neural Network (ANN) and constructed in order to translate the BSL alphabet into the corresponding printed Bangla letters. The proposed ANN is trained with features of sign alphabet using feed-forward backpropagation learning algorithm. Logarithmic sigmoid (logsig) function is chosen as transfer function. This ANN model demonstrated a good recognition performance with the mean square error values in this training function. This recognition system does not use any gloves or visual marking systems. This system only requires the images of the bare hand for the recognition. The Simulation results show that this system is able to recognize 36 selected letters of BSL alphabet with an average accuracy of 80.902%.",
"title": ""
},
{
"docid": "10e41955aea6710f198744ac1f201d64",
"text": "Current research on culture focuses on independence and interdependence and documents numerous East-West psychological differences, with an increasing emphasis placed on cognitive mediating mechanisms. Lost in this literature is a time-honored idea of culture as a collective process composed of cross-generationally transmitted values and associated behavioral patterns (i.e., practices). A new model of neuro-culture interaction proposed here addresses this conceptual gap by hypothesizing that the brain serves as a crucial site that accumulates effects of cultural experience, insofar as neural connectivity is likely modified through sustained engagement in cultural practices. Thus, culture is \"embrained,\" and moreover, this process requires no cognitive mediation. The model is supported in a review of empirical evidence regarding (a) collective-level factors involved in both production and adoption of cultural values and practices and (b) neural changes that result from engagement in cultural practices. Future directions of research on culture, mind, and the brain are discussed.",
"title": ""
},
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "85c4c0ffb224606af6bc3af5411d31ca",
"text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-tofine attention models lag behind state-ofthe-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.",
"title": ""
},
{
"docid": "87ca3f4c11e4853a4b2a153d5b9f1bfe",
"text": "The study of light verbs and complex predicates is frought wi th dangers and misunderstandings that go beyond the merely terminological. This paper attemp s to pic through the terminological, theoretical and empirical jungle in order to arrive at a nove l understanding of the role of light verbs crosslinguistically. In particular, this paper addresses how light verbs and complex predicates can be identified crosslinguistically, what the relationsh ip between the two is, and whether light verbs must always be associated with uniform syntactic and s emantic properties. Finally, the paper proposes a novel view of how light verbs are situated in the le xicon by addressing some historical data and their relationship with preverbs and verb particle s. Jespersen (1965,Volume VI:117) is generally credited with first coining the termlight verb, which he applied to English V+NP constructions as in (1).",
"title": ""
},
{
"docid": "f7f5b4ba1ca99e1492654cc1384a92d0",
"text": "We present an online semi-supervised dictionary learning algorithm for classification tasks. Specifically, we integrate the reconstruction error of labeled and unlabeled data, the discriminative sparse-code error, and the classification error into an objective function for online dictionary learning, which enhances the dictionary’s representative and discriminative power. In addition, we propose a probabilistic model over the sparse codes of input signals, which allows us to expand the labeled set. As a consequence, the dictionary and the classifier learned from the enlarged labeled set yield lower generalization error on unseen data. Our approach learns a single dictionary and a predictive linear classifier jointly. Experimental results demonstrate the effectiveness of our approach in face and object category recognition applications.",
"title": ""
},
{
"docid": "1e1cad07832b4f37ce5573592e3a8074",
"text": "The current BSC guidance issued by the FDA allows for biowaivers based on conservative criteria. Possible new criteria and class boundaries are proposed for additional biowaivers based on the underlying physiology of the gastrointestinal tract. The proposed changes in new class boundaries for solubility and permeability are as follows: 1. Narrow the required solubility pH range from 1.0-7.5 to 1.0-6.8. 2. Reduce the high permeability requirement from 90% to 85%. The following new criterion and potential biowaiver extension require more research: 1. Define a new intermediate permeability class boundary. 2. Allow biowaivers for highly soluble and intermediately permeable drugs in IR solid oral dosage forms with no less than 85% dissolved in 15 min in all physiologically relevant dissolution media, provided these IR products contain only known excipients that do not affect the oral drug absorption. The following areas require more extensive research: 1. Increase the dose volume for solubility classification to 500 mL. 2. Include bile salt in the solubility measurement. 3. Use the intrinsic dissolution method for solubility classification. 4. Define an intermediate solubility class for BCS Class II drugs. 5. Include surfactants in in vitro dissolution testing.",
"title": ""
},
{
"docid": "2e6081fc296fbe22c97d1997a77093f6",
"text": "Despite the security community's best effort, the number of serious vulnerabilities discovered in software is increasing rapidly. In theory, security audits should find and remove the vulnerabilities before the code ever gets deployed. However, due to the enormous amount of code being produced, as well as a the lack of manpower and expertise, not all code is sufficiently audited. Thus, many vulnerabilities slip into production systems. A best-practice approach is to use a code metric analysis tool, such as Flawfinder, to flag potentially dangerous code so that it can receive special attention. However, because these tools have a very high false-positive rate, the manual effort needed to find vulnerabilities remains overwhelming. In this paper, we present a new method of finding potentially dangerous code in code repositories with a significantly lower false-positive rate than comparable systems. We combine code-metric analysis with metadata gathered from code repositories to help code review teams prioritize their work. The paper makes three contributions. First, we conducted the first large-scale mapping of CVEs to GitHub commits in order to create a vulnerable commit database. Second, based on this database, we trained a SVM classifier to flag suspicious commits. Compared to Flawfinder, our approach reduces the amount of false alarms by over 99 % at the same level of recall. Finally, we present a thorough quantitative and qualitative analysis of our approach and discuss lessons learned from the results. We will share the database as a benchmark for future research and will also provide our analysis tool as a web service.",
"title": ""
},
{
"docid": "b63635129ab0663efa374b83f2b77944",
"text": "Cannabis sativa L. is an important herbaceous species originating from Central Asia, which has been used in folk medicine and as a source of textile fiber since the dawn of times. This fast-growing plant has recently seen a resurgence of interest because of its multi-purpose applications: it is indeed a treasure trove of phytochemicals and a rich source of both cellulosic and woody fibers. Equally highly interested in this plant are the pharmaceutical and construction sectors, since its metabolites show potent bioactivities on human health and its outer and inner stem tissues can be used to make bioplastics and concrete-like material, respectively. In this review, the rich spectrum of hemp phytochemicals is discussed by putting a special emphasis on molecules of industrial interest, including cannabinoids, terpenes and phenolic compounds, and their biosynthetic routes. Cannabinoids represent the most studied group of compounds, mainly due to their wide range of pharmaceutical effects in humans, including psychotropic activities. The therapeutic and commercial interests of some terpenes and phenolic compounds, and in particular stilbenoids and lignans, are also highlighted in view of the most recent literature data. Biotechnological avenues to enhance the production and bioactivity of hemp secondary metabolites are proposed by discussing the power of plant genetic engineering and tissue culture. In particular two systems are reviewed, i.e., cell suspension and hairy root cultures. Additionally, an entire section is devoted to hemp trichomes, in the light of their importance as phytochemical factories. Ultimately, prospects on the benefits linked to the use of the -omics technologies, such as metabolomics and transcriptomics to speed up the identification and the large-scale production of lead agents from bioengineered Cannabis cell culture, are presented.",
"title": ""
},
{
"docid": "d2125afa5927946c17d434cb42011870",
"text": "This paper presents an enhancement to the earlier developed Vector Field Histogram (VFH) method for mobile robot obstacle avoidance. The enhanced method, called VFH*, successfully deals with situations that are problematic for purely local obstacle avoidance algorithms. The VFH* method verifies that a particular candidate direction guides the robot around an obstacle. The verification is performed by using the A* search algorithm and appropriate cost and heuristic functions.",
"title": ""
},
{
"docid": "0c1f01d9861783498c44c7c3d0acd57e",
"text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.",
"title": ""
},
{
"docid": "ab15d55e8308843c526aed0c32db1cb2",
"text": "ix Chapter 1: Introduction 1 1.1 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Human-Robot Communication . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2: Background and Related Work 11 2.1 Manual Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Task-Level Robot Control . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Learning from Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 13 2.3.1 Demonstration Approaches . . . . . . . . . . . . . . . . . . . . . 14 2.3.2 Policy Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4 Life-Long Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Chapter 3: Learning from Demonstration 19 3.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Role of the Instructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.3 Role of the Student . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.4 Knowledge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Human-Robot Communication . . . . . . . . . . . . . . . . . . . 24 3.4.2 System Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.5 Learning a Task Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30",
"title": ""
},
{
"docid": "1e8da27c0c8243e4b43c223287eec0e3",
"text": "In this paper, we propose a new framework for hierarchical image segmentation based on iterative contraction and merging. In the proposed framework, we treat the hierarchical image segmentation problem as a sequel of optimization problems, with each optimization process being realized by a contraction-and-merging process to identify and merge the most similar data pairs at the current resolution. At the beginning, we perform pixel-based contraction and merging to quickly combine image pixels into initial region-elements with visually indistinguishable intra-region color difference. After that, we iteratively perform region-based contraction and merging to group adjacent regions into larger ones to progressively form a segmentation dendrogram for hierarchical segmentation. Comparing with the state-of-the-art techniques, the proposed algorithm can not only produce high-quality segmentation results in a more efficient way, but also keep a lot of boundary details in the segmentation results.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "a05fdb35110cd940cd6f4f64e950d1d0",
"text": "The vector control system of permanent magnet synchronous motor based on sliding mode observer (SMO) is studied in this paper. On the basis of analyzing the traditional sliding mode observer, an improved sliding mode observer is proposed. Firstly, by using the hyperbolic tangent function instead of the traditional symbol function, the chattering of the system is suppressed. Secondly, a low pass filter which has the variable cutoff frequency along with the rotor speed is designed to reduce the phase delay. Then, by using Kalman filter, it could make back EMF information more smoothly. Finally, in order to obtain accurate position and velocity information, the method of phase-locked loop (PLL) is proposed to estimate the position and speed of the rotor. The simulation results show that the new algorithm can not only improve the accuracy of the position and speed estimation of the rotor but reduce the chattering of the system.",
"title": ""
},
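As a rough, generic illustration of the chattering-reduction idea in the abstract above (replacing the hard sign switching term with a smooth hyperbolic tangent), here is a toy first-order tracking sketch. The sinusoidal signal, gain and boundary-layer width are made-up placeholders, not the paper's PMSM observer model.

```python
# Toy illustration: hard sign switching vs. smooth tanh switching in a
# sliding-mode-style correction term (made-up gains and signal, not the PMSM model).
import numpy as np

dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)
x_true = np.sin(2 * np.pi * t)             # signal the estimator should track
k, phi = 10.0, 0.05                        # switching gain and boundary-layer width

def run(switch):
    x_hat = np.zeros_like(t)
    for i in range(1, len(t)):
        e = x_true[i - 1] - x_hat[i - 1]   # tracking error
        x_hat[i] = x_hat[i - 1] + dt * k * switch(e)
    return x_hat

hard = run(np.sign)                        # discontinuous switching -> chattering
soft = run(lambda e: np.tanh(e / phi))     # hyperbolic tangent switching -> smoother

# Crude chattering measure: average magnitude of the estimate's increments.
print("mean |increment| (sign):", np.abs(np.diff(hard)).mean())
print("mean |increment| (tanh):", np.abs(np.diff(soft)).mean())
```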
{
"docid": "9a7e6d0b253de434e62eb6998ff05f47",
"text": "Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 105 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 106 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.",
"title": ""
}
] |
scidocsrr
|
a17978c2b85c8efb21ea7c0c5172f9cf
|
System Characteristics, Satisfaction and E-Learning Usage: A Structural Equation Model (SEM)
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "4f51f8907402f9859a77988f967c755f",
"text": "As a promising solution, electronic learning (e-learning) has been widely adopted by many companies to offer learning-on-demand opportunities to individual employees for reducing training time and cost. While information systems (IS) success models have received much attention among researchers, little research has been conducted to assess the success and/or effectiveness of e-learning systems in an organizational context. Whether traditional information systems success models can be extended to investigating e-learning systems success is rarely addressed. Based on the previous IS success literature, this study develops and validates a multidimensional model for assessing e-learning systems success (ELSS) from employee (e-learner) perspectives. The procedures used in conceptualizing an ELSS construct, generating items, collecting data, and validating a multiple-item scale for measuring ELSS are described. This paper presents evidence of the scale’s factor structure, reliability, content validity, criterion-related validity, convergent validity, and discriminant validity on the basis of analyzing data from a sample of 206 respondents. Theoretical and managerial implications of our results are then discussed. This empirically validated instrument will be useful to researchers in developing and testing e-learning systems theories, as well as to organizations in implementing successful e-learning systems.",
"title": ""
}
] |
[
{
"docid": "b41d8ca866268133f2af88495dad6482",
"text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.",
"title": ""
},
{
"docid": "d59b64b96cc79a2e21e705c021473f2a",
"text": "Bovine colostrum (first milk) contains very high concentrations of IgG, and on average 1 kg (500 g/liter) of IgG can be harvested from each immunized cow immediately after calving. We used a modified vaccination strategy together with established production systems from the dairy food industry for the large-scale manufacture of broadly neutralizing HIV-1 IgG. This approach provides a low-cost mucosal HIV preventive agent potentially suitable for a topical microbicide. Four cows were vaccinated pre- and/or postconception with recombinant HIV-1 gp140 envelope (Env) oligomers of clade B or A, B, and C. Colostrum and purified colostrum IgG were assessed for cross-clade binding and neutralization against a panel of 27 Env-pseudotyped reporter viruses. Vaccination elicited high anti-gp140 IgG titers in serum and colostrum with reciprocal endpoint titers of up to 1 × 10(5). While nonimmune colostrum showed some intrinsic neutralizing activity, colostrum from 2 cows receiving a longer-duration vaccination regimen demonstrated broad HIV-1-neutralizing activity. Colostrum-purified polyclonal IgG retained gp140 reactivity and neutralization activity and blocked the binding of the b12 monoclonal antibody to gp140, showing specificity for the CD4 binding site. Colostrum-derived anti-HIV antibodies offer a cost-effective option for preparing the substantial quantities of broadly neutralizing antibodies that would be needed in a low-cost topical combination HIV-1 microbicide.",
"title": ""
},
{
"docid": "5623321fb6c3a7c0b22980ce663632cd",
"text": "Vector representations for language have been shown to be useful in a number of Natural Language Processing (NLP) tasks. In this thesis, we aim to investigate the effectiveness of word vector representations for the research problem of Aspect-Based Sentiment Analysis (ABSA), which attempts to capture both semantic and sentiment information encoded in user generated content such as product reviews. In particular, we target three ABSA sub-tasks: aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data, and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.9% for aspect term extraction, 86.7% for category detection, and 72.3% for aspect sentiment prediction. Co Thesis Supervisor: James Glass Title: Senior Research Scientist Co Thesis Supervisor: Mitra Mohtarami Title: Postdoctoral Associate 3",
"title": ""
},
{
"docid": "f8c7fcba6d0cb889836dc868f3ba12c8",
"text": "This article reviews dominant media portrayals of mental illness, the mentally ill and mental health interventions, and examines what social, emotional and treatment-related effects these may have. Studies consistently show that both entertainment and news media provide overwhelmingly dramatic and distorted images of mental illness that emphasise dangerousness, criminality and unpredictability. They also model negative reactions to the mentally ill, including fear, rejection, derision and ridicule. The consequences of negative media images for people who have a mental illness are profound. They impair self-esteem, help-seeking behaviours, medication adherence and overall recovery. Mental health advocates blame the media for promoting stigma and discrimination toward people with a mental illness. However, the media may also be an important ally in challenging public prejudices, initiating public debate, and projecting positive, human interest stories about people who live with mental illness. Media lobbying and press liaison should take on a central role for mental health professionals, not only as a way of speaking out for patients who may not be able to speak out for themselves, but as a means of improving public education and awareness. Also, given the consistency of research findings in this field, it may now be time to shift attention away from further cataloguing of media representations of mental illness to the more challenging prospect of how to use the media to improve the life chances and recovery possibilities for the one in four people living with mental disorders.",
"title": ""
},
{
"docid": "8d30afbccfa76492b765f69d34cd6634",
"text": "Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than stateof-the-art baselines.",
"title": ""
},
{
"docid": "fb31665935c1a0964e70c864af8ff46f",
"text": "In the context of object and scene recognition, state-of-the-art performances are obtained with visual Bag-of-Words (BoW) models of mid-level representations computed from dense sampled local descriptors (e.g., Scale-Invariant Feature Transform (SIFT)). Several methods to combine low-level features and to set mid-level parameters have been evaluated recently for image classification. In this chapter, we study in detail the different components of the BoW model in the context of image classification. Particularly, we focus on the coding and pooling steps and investigate the impact of the main parameters of the BoW pipeline. We show that an adequate combination of several low (sampling rate, multiscale) and mid-level (codebook size, normalization) parameters is decisive to reach good performances. Based on this analysis, we propose a merging scheme that exploits the specificities of edge-based descriptors. Low and high contrast regions are pooled separately and combined to provide a powerful representation of images. We study the impact on classification performance of the contrast threshold that determines whether a SIFT descriptor corresponds to a low contrast region or a high contrast region. Successful experiments are provided on the Caltech-101 and Scene-15 datasets. M. T. Law (B) · N. Thome · M. Cord LIP6, UPMC—Sorbonne University, Paris, France e-mail: [email protected] N. Thome e-mail: [email protected] M. Cord e-mail: [email protected] B. Ionescu et al. (eds.), Fusion in Computer Vision, Advances in Computer 29 Vision and Pattern Recognition, DOI: 10.1007/978-3-319-05696-8_2, © Springer International Publishing Switzerland 2014",
"title": ""
},
{
"docid": "ed5b6ea3b1ccc22dff2a43bea7aaf241",
"text": "Testing is an important process that is performed to support quality assurance. Testing activities support quality assurance by gathering information about the nature of the software being studied. These activities consist of designing test cases, executing the software with those test cases, and examining the results produced by those executions. Studies indicate that more than fifty percent of the cost of software development is devoted to testing, with the percentage for testing critical software being even higher. As software becomes more pervasive and is used more often to perform critical tasks, it will be required to be of higher quality. Unless we can find efficient ways to perform effective testing, the percentage of development costs devoted to testing will increase significantly. This report briefly assesses the state of the art in software testing, outlines some future directions in software testing, and gives some pointers to software testing resources.",
"title": ""
},
{
"docid": "a602a532a7b95eae050d084e10606951",
"text": "Municipal solid waste management has emerged as one of the greatest challenges facing environmental protection agencies in developing countries. This study presents the current solid waste management practices and problems in Nigeria. Solid waste management is characterized by inefficient collection methods, insufficient coverage of the collection system and improper disposal. The waste density ranged from 280 to 370 kg/m3 and the waste generation rates ranged from 0.44 to 0.66 kg/capita/day. The common constraints faced environmental agencies include lack of institutional arrangement, insufficient financial resources, absence of bylaws and standards, inflexible work schedules, insufficient information on quantity and composition of waste, and inappropriate technology. The study suggested study of institutional, political, social, financial, economic and technical aspects of municipal solid waste management in order to achieve sustainable and effective solid waste management in Nigeria.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "6eca26209b9fcca8a9df76307108a3a8",
"text": "Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loegraveve transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that better matches a specific application. Third, we integrate this, as well as other existing transforms, in the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000, and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications",
"title": ""
},
{
"docid": "d2836880ac69bf35e53f5bc6de8bc5dc",
"text": "There is currently significant interest in freeform, curve-based authoring of graphic images. In particular, \"diffusion curves\" facilitate graphic image creation by allowing an image designer to specify naturalistic images by drawing curves and setting colour values along either side of those curves. Recently, extensions to diffusion curves based on the biharmonic equation have been proposed which provide smooth interpolation through specified colour values and allow image designers to specify colour gradient constraints at curves. We present a Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation. The diffusion curve image can be evaluated from the solved representation using a novel and efficient line-by-line approach. We also describe \"curve-aware\" upsampling, in which a full resolution diffusion curve image can be upsampled from a lower resolution image using formula evaluated orrections near curves. The BEM solved image representation is compact. It therefore offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.",
"title": ""
},
{
"docid": "235e1f328a847fa7b6e074a58defed0b",
"text": "A stemming algorithm, a procedure to reduce all words with the same stem to a common form, is useful in many areas of computational linguistics and information-retrieval work. While the form of the algorithm varies with its application, certain linguistic problems are common to any stemming procedure. As a basis for evaluation of previous attempts to deal with these problems, this paper first discusses the theoretical and practical attributes of stemming algorithms. Then a new version of a context-sensitive, longest-match stemming algorithm for English is proposed; though developed for use in a library information transfer system, it is of general application. A major linguistic problem in stemming, variation in spelling of stems, is discussed in some detail and several feasible programmed solutions are outlined, along with sample results of one of these methods.",
"title": ""
},
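To make the longest-match idea in the stemming abstract above concrete, here is a toy sketch with a small, invented suffix list and a crude minimum-stem-length condition; it is not the context-sensitive algorithm the passage proposes.

```python
# Toy longest-match stemmer: strip the longest matching suffix from a small,
# illustrative suffix list, subject to a crude minimum-stem-length condition.
SUFFIXES = sorted(["ational", "ization", "fulness", "ations", "ness", "ing",
                   "ions", "ies", "ed", "es", "s"], key=len, reverse=True)

def stem(word, min_stem=3):
    for suf in SUFFIXES:                 # longest suffixes are tried first
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[: -len(suf)]
    return word

# e.g. connections -> connect, hopefulness -> hope, running -> runn
# (no consonant-doubling or spelling-variation rules in this toy version).
for w in ["connections", "relational", "hopefulness", "running", "cats"]:
    print(w, "->", stem(w))
```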
{
"docid": "8e50613e8aab66987d650cd8763811e5",
"text": "Along with the great increase of internet and e-commerce, the use of credit card is an unavoidable one. Due to the increase of credit card usage, the frauds associated with this have also increased. There are a lot of approaches used to detect the frauds. In this paper, behavior based classification approach using Support Vector Machines are employed and efficient feature extraction method also adopted. If any discrepancies occur in the behaviors transaction pattern then it is predicted as suspicious and taken for further consideration to find the frauds. Generally credit card fraud detection problem suffers from a large amount of data, which is rectified by the proposed method. Achieving finest accuracy, high fraud catching rate and low false alarms are the main tasks of this approach.",
"title": ""
},
{
"docid": "4a9474c0813646708400fc02c344a976",
"text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.",
"title": ""
},
{
"docid": "733a7a024f5e408323f9b037828061bb",
"text": "Hidden Markov model (HMM) is one of the popular techniques for story segmentation, where hidden Markov states represent the topics, and the emission distributions of n-gram language model (LM) are dependent on the states. Given a text document, a Viterbi decoder finds the hidden story sequence, with a change of topic indicating a story boundary. In this paper, we propose a discriminative approach to story boundary detection. In the HMM framework, we use deep neural network (DNN) to estimate the posterior probability of topics given the bag-ofwords in the local context. We call it the DNN-HMM approach. We consider the topic dependent LM as a generative modeling technique, and the DNN-HMM as the discriminative solution. Experiments on topic detection and tracking (TDT2) task show that DNN-HMM outperforms traditional n-gram LM approach significantly and achieves state-of-the-art performance.",
"title": ""
},
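As an illustration of the decoding half of the DNN-HMM pipeline described above, here is a minimal generic Viterbi sketch over made-up per-sentence topic posteriors with a sticky transition matrix; the numbers are placeholders, not trained DNN outputs or the paper's TDT2 setup.

```python
# Generic Viterbi sketch: decode a topic sequence from per-sentence topic scores,
# then mark a story boundary wherever the decoded topic changes.
# All probabilities below are made-up placeholders, not trained DNN outputs.
import numpy as np

post = np.array([            # emission-like scores: P(topic | sentence context)
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.2, 0.7],
    [0.1, 0.1, 0.8],
])
trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85   # sticky transitions favour staying on-topic
prior = np.full(3, 1.0 / 3.0)

T, K = post.shape
delta = np.log(prior) + np.log(post[0])
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    scores = delta[:, None] + np.log(trans)        # scores[i, j]: best path ending in i, moving to j
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + np.log(post[t])

path = [int(delta.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
path.reverse()

boundaries = [t for t in range(1, T) if path[t] != path[t - 1]]
print("topic path:", path)                          # [0, 0, 1, 1, 2, 2] with these toy numbers
print("story boundaries after sentences:", boundaries)   # [2, 4]
```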
{
"docid": "3ab4c2383569fc02f0395e79070dc16d",
"text": "A report released last week by the US National Academies makes recommendations for tackling the issues surrounding the era of petabyte science.",
"title": ""
},
{
"docid": "f006fff7ddfaed4b6016d59377144b7a",
"text": "In this paper I consider whether traditional behaviors of animals, like traditions of humans, are transmitted by imitation learning. Review of the literature on problem solving by captive primates, and detailed consideration of two widely cited instances of purported learning by imitation and of culture in free-living primates (sweet-potato washing by Japanese macaques and termite fishing by chimpanzees), suggests that nonhuman primates do not learn to solve problems by imitation. It may, therefore, be misleading to treat animal traditions and human culture as homologous (rather than analogous) and to refer to animal traditions as cultural.",
"title": ""
},
{
"docid": "745451b3ca65f3388332232b370ea504",
"text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.",
"title": ""
},
{
"docid": "a00acd7a9a136914bf98478ccd85e812",
"text": "Deep-learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function, have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.",
"title": ""
},
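To illustrate the class re-balancing idea in the abstract above, here is a minimal NumPy sketch of a generalized Dice loss with inverse-squared-volume class weights; the smoothing terms and the toy two-class example are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal NumPy sketch of a generalized Dice loss with inverse-squared-volume
# class weights; the eps smoothing terms are an illustrative choice.
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """probs, onehot: arrays of shape (num_classes, num_voxels)."""
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)      # rare classes get large weights
    intersect = (probs * onehot).sum(axis=1)
    denom = (probs + onehot).sum(axis=1)
    return 1.0 - 2.0 * (w * intersect).sum() / ((w * denom).sum() + eps)

# Toy 2-class example with severe imbalance: 2 foreground voxels out of 12.
onehot = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],    # background
                   [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]], float)  # rare foreground
good = np.clip(onehot + np.random.default_rng(0).normal(0, 0.05, onehot.shape), 0, 1)
bad = np.vstack([np.ones(12), np.zeros(12)])       # predicts background everywhere

print("loss (good prediction):", round(generalized_dice_loss(good, onehot), 3))
print("loss (misses the rare class):", round(generalized_dice_loss(bad, onehot), 3))
```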
{
"docid": "26aee4feb558468d571138cd495f51d3",
"text": "A 300-MHz, custom 64-bit VLSI, second-generation Alpha CPU chip has been developed. The chip was designed in a 0.5-um CMOS technology using four levels of metal. The die size is 16.5 mm by 18.1 mm, contains 9.3 million transistors, operates at 3.3 V, and supports 3.3-V/5.0-V interfaces. Power dissipation is 50 W. It contains an 8-KB instruction cache; an 8-KB data cache; and a 96-KB unified second-level cache. The chip can issue four instructions per cycle and delivers 1,200 mips/600 MFLOPS (peak). Several noteworthy circuit and implementation techniques were used to attain the target operating frequency.",
"title": ""
}
] |
scidocsrr
|
68e258b3686c79a5539d85d4f4c9ec1f
|
Enterprise Architecture Principles: Literature Review and Research Directions
|
[
{
"docid": "92b61bc041b3b35687ba1cd6f5468941",
"text": "Many organizations adopt cyclical processes to articulate and engineer technological responses to their business needs. Their objective is to increase competitive advantage and add value to the organization's processes, services and deliverables, in line with the organization's vision and strategy. The major challenges in achieving these objectives include the rapid changes in the business and technology environments themselves, such as changes to business processes, organizational structure, architectural requirements, technology infrastructure and information needs. No activity or process is permanent in the organization. To achieve their objectives, some organizations have adopted an Enterprise Architecture (EA) approach, others an Information Technology (IT) strategy approach, and yet others have adopted both EA and IT strategy for the same primary objectives. The deployment of EA and IT strategy for the same aims and objectives raises question whether there is conflict in adopting both approaches. The paper and case study presented here, aimed at both academics and practitioners, examines how EA could be employed as IT strategy to address both business and IT needs and challenges.",
"title": ""
},
{
"docid": "adcb28fcc215a74313d583c520ed3036",
"text": "I t’s rare that a business stays just as it began year after year.So if we agree that businesses evolve, it follows that information systems must evolve to keep pace.So far,so good.The disconnect occurs when an enterprise’s management knows that the information systems must evolve,but keeps patching and whipping the legacy systems to meet one more requirement. If you put the problem in its simplest terms,management has a choice about how it will grow its information systems.If there is a clear strategic vision for the enterprise, it seems logical to have an equally broad vision for the systems that support that strategy. Managers can thus choose to plan evolution,or they can react when reality hits and “evolve”parts of the information system according to the latest crisis. It’s a bit of a no-brainer as to which is the better choice. But it’s also easy to understand why few enterprises pick it. Conceiving, planning, and monitoring systems that support a long-range strategic vision is not trivial. Enterprise-wide information systems typically start from a base of legacy systems.And not just any legacy systems.They are typically unwieldy systems of systems with a staggering array of hardware,software, design strategies, and implementation platforms. To make the job even more difficult, “enterprise-wide” often means city to city, state to state, or even country to country. Getting these pieces to seamlessly interact and evolve according to long-range strategic business objectives may seem like mission impossible; for a large distributed organization,however, it is mission critical.",
"title": ""
}
] |
[
{
"docid": "00309acd08acb526f58a70ead2d99249",
"text": "As mainstream news media and political campaigns start to pay attention to the political discourse online, a systematic analysis of political speech in social media becomes more critical. What exactly do people say on these sites, and how useful is this data in estimating political popularity? In this study we examine Twitter discussions surrounding seven US Republican politicians who were running for the US Presidential nomination in 2011. We show this largely negative rhetoric to be laced with sarcasm and humor and dominated by a small portion of users. Furthermore, we show that using out-of-the-box classification tools results in a poor performance, and instead develop a highly optimized multi-stage approach designed for general-purpose political sentiment classification. Finally, we compare the change in sentiment detected in our dataset before and after 19 Republican debates, concluding that, at least in this case, the Twitter political chatter is not indicative of national political polls.",
"title": ""
},
{
"docid": "198967b505c9ded9255bff7b82fb2781",
"text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.",
"title": ""
},
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "69a32a7a206284ca5f749ffe456bc6dc",
"text": "Urinary incontinence is the inability to willingly control bladder voiding. Stress urinary incontinence (SUI) is the most frequently occurring type of incontinence in women. No widely accepted or approved drug therapy is yet available for the treatment of stress urinary incontinence. Numerous studies have implicated the neurotransmitters, serotonin and norepinephrine in the central neural control of the lower urinary tract function. The pudendal somatic motor nucleus of the spinal cord is densely innervated by 5HT and NE terminals. Pharmacological studies confirm central modulation of the lower urinary tract activity by 5HT and NE receptor agonists and antagonists. Duloxetine is a combined serotonin/norepinephrine reuptake inhibitor currently under clinical investigation for the treatment of women with stress urinary incontinence. Duloxetine exerts balanced in vivo reuptake inhibition of 5HT and NE and exhibits no appreciable binding affinity for receptors of neurotransmitters. The action of duloxetine in the treatment of stress urinary incontinence is associated with reuptake inhibition of serotonin and norepinephrine at the presynaptic neuron in Onuf’s nucleus of the sacral spinal cord. In cats, whose bladder had initially been irritated with acetic acid, a dose–dependent improvement of the bladder capacity (5–fold) and periurethral EMG activity (8–fold) of the striated sphincter muscles was found. In a double blind, randomized, placebocontrolled, clinical trial in women with stress urinary incontinence, there was a significant reduction in urinary incontinence episodes under duloxetine treatment. In summary, the pharmacological effect of duloxetine to increase the activity of the striated urethral sphincter together with clinical results indicate that duloxetine has an interesting therapeutic potential in patients with stress urinary incontinence.",
"title": ""
},
{
"docid": "4a4a0dde01536789bd53ec180a136877",
"text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.",
"title": ""
},
{
"docid": "5e7297c25f2aafe8dbb733944ddc29e7",
"text": "Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity (\"alpha matte\") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.",
"title": ""
},
{
"docid": "4ec74a91814f1e63aace2ac43b236b9a",
"text": "This paper discusses the status of research on detection of fraud undertaken as part of the European Commission-funded ACTS ASPeCT (Advanced Security for Personal Communications Technologies) project. A first task has been the identification of possible fraud scenarios and of typical fraud indicators which can be mapped to data in toll tickets. Currently, the project is exploring the detection of fraudulent behaviour based on a combination of absolute and differential usage. Three approaches are being investigated: a rule-based approach, an approach based on neural network, where both supervised and unsupervised learning are considered. Special attention is being paid to the feasibility of the implementations.",
"title": ""
},
{
"docid": "55cfcee1d1e83600ad88a1faef13f684",
"text": "In spite of amazing progress in food supply and nutritional science, and a striking increase in life expectancy of approximately 2.5 months per year in many countries during the previous 150 years, modern nutritional research has a great potential of still contributing to improved health for future generations, granted that the revolutions in molecular and systems technologies are applied to nutritional questions. Descriptive and mechanistic studies using state of the art epidemiology, food intake registration, genomics with single nucleotide polymorphisms (SNPs) and epigenomics, transcriptomics, proteomics, metabolomics, advanced biostatistics, imaging, calorimetry, cell biology, challenge tests (meals, exercise, etc.), and integration of all data by systems biology, will provide insight on a much higher level than today in a field we may name molecular nutrition research. To take advantage of all the new technologies scientists should develop international collaboration and gather data in large open access databases like the suggested Nutritional Phenotype database (dbNP). This collaboration will promote standardization of procedures (SOP), and provide a possibility to use collected data in future research projects. The ultimate goals of future nutritional research are to understand the detailed mechanisms of action for how nutrients/foods interact with the body and thereby enhance health and treat diet-related diseases.",
"title": ""
},
{
"docid": "47e0b0fad94270b705d013364a6932e4",
"text": "This paper introduces for the first time a novel flexible magnetic composite material for RF identification (RFID) and wearable RF antennas. First, one conformal RFID tag working at 480 MHz is designed and fabricated as a benchmarking prototype and the miniaturization concept is verified. Then, the impact of the material is thoroughly investigated using a hybrid method involving electromagnetic and statistical tools. Two separate statistical experiments are performed, one for the analysis of the impact of the relative permittivity and permeability of the proposed material and the other for the evaluation of the impact of the dielectric and magnetic loss on the antenna performance. Finally, the effect of the bending of the antenna is investigated, both on the S-parameters and on the radiation pattern. The successful implementation of the flexible magnetic composite material enables the significant miniaturization of RF passives and antennas in UHF frequency bands, especially when conformal modules that can be easily fine-tuned are required in critical biomedical and pharmaceutical applications.",
"title": ""
},
{
"docid": "2542d745b0ed5c3501db4aaf8e3cc528",
"text": "We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.",
"title": ""
},
{
"docid": "8c2adc6112d3eedc8175a61555496760",
"text": "What does a user do when he logs in to the Twitter website? Does he merely browse through the tweets of all his friends as a source of information for his own tweets, or does he simply tweet a message of his own personal interest? Does he skim through the tweets of all his friends or only of a selected few? A number of factors might influence a user in these decisions. Does this social influence vary across cultures? In our work, we propose a simple yet effective model to predict the behavior of a user - in terms of which hashtag or named entity he might include in his future tweets. We have approached the problem as a classification task with the various influences contributing as features. Further, we analyze the contribution of the weights of the different features. Using our model we analyze data from different cultures and discover interesting differences in social influence.",
"title": ""
},
{
"docid": "731c5544759a958272e08f928bd364eb",
"text": "A key method of reducing morbidity and mortality is childhood immunization, yet in 2003 only 69% of Filipino children received all suggested vaccinations. Data from the 2003 Philippines Demographic Health Survey were used to identify risk factors for non- and partial-immunization. Results of the multinomial logistic regression analyses indicate that mothers who have less education, and who have not attended the minimally-recommended four antenatal visits are less likely to have fully immunized children. To increase immunization coverage in the Philippines, knowledge transfer to mothers must improve.",
"title": ""
},
{
"docid": "36f928b473faf1e8751abbcbd61acdcd",
"text": "Normal operations of the neocortex depend critically on several types of inhibitory interneurons, but the specific function of each type is unknown. One possibility is that interneurons are differentially engaged by patterns of activity that vary in frequency and timing. To explore this, we studied the strength and short-term dynamics of chemical synapses interconnecting local excitatory neurons (regular-spiking, or RS, cells) with two types of inhibitory interneurons: fast-spiking (FS) cells, and low-threshold spiking (LTS) cells of layer 4 in the rat barrel cortex. We also tested two other pathways onto the interneurons: thalamocortical connections and recurrent collaterals from corticothalamic projection neurons of layer 6. The excitatory and inhibitory synapses interconnecting RS cells and FS cells were highly reliable in response to single stimuli and displayed strong short-term depression. In contrast, excitatory and inhibitory synapses interconnecting the RS and LTS cells were less reliable when initially activated. Excitatory synapses from RS cells onto LTS cells showed dramatic short-term facilitation, whereas inhibitory synapses made by LTS cells onto RS cells facilitated modestly or slightly depressed. Thalamocortical inputs strongly excited both RS and FS cells but rarely and only weakly contacted LTS cells. Both types of interneurons were strongly excited by facilitating synapses from axon collaterals of corticothalamic neurons. We conclude that there are two parallel but dynamically distinct systems of synaptic inhibition in layer 4 of neocortex, each defined by its intrinsic spiking properties, the short-term plasticity of its chemical synapses, and (as shown previously) an exclusive set of electrical synapses. Because of their unique dynamic properties, each inhibitory network will be recruited by different temporal patterns of cortical activity.",
"title": ""
},
{
"docid": "13452d0ceb4dfd059f1b48dba6bf5468",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6afb6140edbfdabb2f2c1a0cbee23665",
"text": "The advent of Web 2.0 has led to an increase in the amount of sentimental content available in the Web. Such content is often found in social media web sites in the form of movie or product reviews, user comments, testimonials, messages in discussion forums etc. Timely discovery of the sentimental or opinionated web content has a number of advantages, the most important of all being monetization. Understanding of the sentiments of human masses towards different entities and products enables better services for contextual advertisements, recommendation systems and analysis of market trends. The focus of our project is sentiment focussed web crawling framework to facilitate the quick discovery of sentimental contents of movie reviews and hotel reviews and analysis of the same. We use statistical methods to capture elements of subjective style and the sentence polarity. The paper elaborately discusses two supervised machine learning algorithms: K-Nearest Neighbour(KNN) and Naïve Bayes‘ and compares their overall accuracy, precisions as well as recall values. It was seen that in case of movie reviews Naïve Bayes‘ gave far better results than K-NN but for hotel reviews these algorithms gave lesser, almost same accuracies.",
"title": ""
},
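As a small illustration of the kind of comparison described in the abstract above (not the authors' data or exact setup), here is a minimal scikit-learn sketch that trains k-NN and multinomial Naive Bayes on a few invented review snippets and reports their accuracy on a tiny held-out pair.

```python
# Minimal sketch comparing k-NN and multinomial Naive Bayes on toy review text
# (made-up snippets, not the paper's movie/hotel review corpora).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

train_docs = ["great movie, loved the plot", "terrible acting and boring story",
              "wonderful hotel, friendly staff", "dirty room and rude service",
              "excellent film, would watch again", "awful stay, never coming back"]
train_y = [1, 0, 1, 0, 1, 0]          # 1 = positive, 0 = negative
test_docs = ["boring plot and terrible story", "friendly staff and great room"]
test_y = [0, 1]

vec = CountVectorizer()
X_train = vec.fit_transform(train_docs)
X_test = vec.transform(test_docs)

for name, clf in [("k-NN (k=3)", KNeighborsClassifier(n_neighbors=3)),
                  ("Naive Bayes", MultinomialNB())]:
    clf.fit(X_train, train_y)
    print(f"{name}: accuracy = {clf.score(X_test, test_y):.2f}")
```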
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
},
{
"docid": "8b6d3b5fb8af809619119ee0f75cb3c6",
"text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "268ccb986855aabafa9de8f95668e7c4",
"text": "This paper investigates the performance of South Africa’s commercial banking sector for the period 20052009. Financial ratios are employed to measure the profitability, liquidity and credit quality performance of five large South African based commercial banks. The study found that overall bank performance increased considerably in the first two years of the analysis. A significant change in trend is noticed at the onset of the global financial crisis in 2007, reaching its peak during 2008-2009. This resulted in falling profitability, low liquidity and deteriorating credit quality in the South African Banking sector.",
"title": ""
},
{
"docid": "686045e2dae16aba16c26b8ccd499731",
"text": "It has been argued that platform technology owners cocreate business value with other firms in their platform ecosystems by encouraging complementary invention and exploiting indirect network effects. In this study, we examine whether participation in an ecosystem partnership improves the business performance of small independent software vendors (ISVs) in the enterprise software industry and how appropriability mechanisms influence the benefits of partnership. By analyzing the partnering activities and performance indicators of a sample of 1,210 small ISVs over the period 1996–2004, we find that joining a major platform owner’s platform ecosystem is associated with an increase in sales and a greater likelihood of issuing an initial public offering (IPO). In addition, we show that these impacts are greater when ISVs have greater intellectual property rights or stronger downstream capabilities. This research highlights the value of interoperability between software products, and stresses that value cocreation and appropriation are not mutually exclusive strategies in interfirm collaboration.",
"title": ""
}
] |
scidocsrr
|
91397d2975dc5c569dd936f71b13ba8a
|
Risks and Resilience of Collaborative Networks
|
[
{
"docid": "d12ba2f4c25bb7555475ac9fc6550df8",
"text": "Supply networks are composed of large numbers of firms from multiple interrelated industries. Such networks are subject to shifting strategies and objectives within a dynamic environment. In recent years, when faced with a dynamic environment, several disciplines have adopted the Complex Adaptive System (CAS) perspective to gain insights into important issues within their domains of study. Research investigations in the field of supply networks have also begun examining the merits of complexity theory and the CAS perspective. In this article, we bring the applicability of complexity theory and CAS into sharper focus, highlighting its potential for integrating existing supply chain management (SCM) research into a structured body of knowledge while also providing a framework for generating, validating, and refining new theories relevant to real-world supply networks. We suggest several potential research questions to emphasize how a ∗We sincerely thank Professors Thomas Choi (Arizona State University), David Dilts (Vanderbilt University), and Kevin Dooley (Arizona State University) for their help, guidance, and support. †Corresponding author.",
"title": ""
}
] |
[
{
"docid": "f9e67768e59ba9c4048be2b78f3d2823",
"text": "Ontologies are a widely accepted tool for the modeling of context information. We view the identification of the benefits and challenges of ontologybased models to be an important next step to further improve the usability of ontologies in context-aware applications. We outline a set of criteria with respect to ontology engineering and context modeling and discuss some recent achievements in the area of ontology-based context modeling in order to determine the important next steps necessary to fully exploit ontologies in pervasive computing.",
"title": ""
},
{
"docid": "3a855c3c3329ff63037711e8d17249e3",
"text": "In this work, we present an adaptation of the sequence-tosequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.",
"title": ""
},
{
"docid": "f6d3157155868f5fafe2533dfd8768b8",
"text": "Over the past few years, the task of conceiving effective attacks to complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices, from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired in the foraging behaviour of honeybees. In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown. E-mail addresses: [email protected] (M. Lozano), [email protected] (C. GarćıaMart́ınez), [email protected] (F.J. Rodŕıguez), [email protected] (H.M. Trujillo). Preprint submitted to Information Sciences August 17, 2016 *Manuscript (including abstract) Click here to view linked References",
"title": ""
},
{
"docid": "b06844c98f1b46e6d3bd583aacd76015",
"text": "The task of network management and monitoring relies on an accurate characterization of network traffic generated by different applications and network protocols. We employ three supervisedmachine learning (ML) algorithms, BayesianNetworks, Decision Trees and Multilayer Perceptrons for the flow-based classification of six different types of Internet traffic including peer-to-peer (P2P) and content delivery (Akamai) traffic. The dependency of the traffic classification performance on the amount and composition of training data is investigated followed by experiments that show that ML algorithms such as Bayesian Networks and Decision Trees are suitable for Internet traffic flow classification at a high speed, and prove to be robust with respect to applications that dynamically change their source ports. Finally, the importance of correctly classified training instances is highlighted by an experiment that is conducted with wrongly labeled training data. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e2d83db54bc0eacfb3b562c38125fc28",
"text": "Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.",
"title": ""
},
{
"docid": "eab052e8172c62fec9b532400fe5eeb6",
"text": "An overview on state of the art automotive radar usage is presented and the changing requirements from detection and ranging towards radar based environmental understanding for highly automated and autonomous driving deduced. The traditional segmentation in driving, manoeuvering and parking tasks vanishes at the driver less stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational up-date, motion prediction of all kind of dynamic objects, object dimension, ego-motion estimation, (self)-localisation and more semantic/classification information, which allows to put static and dynamic world into correlation/context with each other is mandatory. All these are new areas for radar signal processing and needs revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar based environmental perception.",
"title": ""
},
{
"docid": "0dc1bf3422e69283a93d0dd87caeb84f",
"text": "Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed to be too cumbersome to be justi®ed ®nancially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel, to verify that the analysis of the behavioral data yielded similar results. The ®nal results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement. # 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "83cc283967bf6bc7f04729a5e08660e2",
"text": "Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students o f language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j) ' ) is defined to be true just in case the individual denoted by 'John' (or ' j ') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different) 1, suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8\"? 5'9\"? 5'10\"? 5'11\"? 6'? 6'2\"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and \"nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given",
"title": ""
},
{
"docid": "46bee248655c79a0364fee437bc43eaf",
"text": "Parkinson disease (PD) is a universal public health problem of massive measurement. Machine learning based method is used to classify between healthy people and people with Parkinson’s disease (PD). This paper presents a comprehensive review for the prediction of Parkinson disease buy using machine learning based approaches. The brief introduction of various computational intelligence techniques based approaches used for the prediction of Parkinson diseases are presented .This paper also presents the summary of results obtained by various researchers available in literature to predict the Parkinson diseases. Keywords— Parkinson’s disease, classification, random forest, support vector machine, machine learning, signal processing, artificial neural network.",
"title": ""
},
{
"docid": "140d6d345aa6d486a30e596dde25a8ef",
"text": "This research uses the absorptive capacity (ACAP) concept as a theoretical lens to study the effect of e-business upon the competitive performance of SMEs, addressing the following research issue: To what extent are manufacturing SMEs successful in developing their potential and realized ACAP in line with their entrepreneurial orientation? A survey study of 588 manufacturing SMEs found that their e-business capabilities, considered as knowledge acquisition and assimilation capabilities have an indirect effect on their competitive performance that is mediated by their knowledge transformation and exploitation capabilities, and insofar as these capabilities are developed as a result of a more entrepreneurial orientation on their part. Finally, the effect of this orientation on the SMEs' competitive performance appears to be totally mediated by their ACAP.",
"title": ""
},
{
"docid": "0fc0816d62a8d13c3e415b5a1ae7e1d4",
"text": "The rapid pace of business process change, partially fueled by information technology, is placing increasingly difficult demands on the organization. In many industries, organizations are required to evaluate and assess new information technologies and their organization-specific strategic potential, in order to remain competitive. The scanning, adoption and diffusion of this information technology must be carefully guided by strong strategic and technological leadership in order to infuse the organization and its members with strategic and technological visions, and to coordinate their diverse and decentralized expertise. This view of technological diffusion requires us to look beyond individuals and individual adoption, toward other levels of analysis and social theoretical viewpoints to promote the appropriate and heedful diffusion of often organization-wide information technologies. Particularly important is an examination of the diffusion champions and how a feasible and shared vision of the business and information technology can be created and communicated across organizational communities in order to unify, motivate and mobilize technology change process. The feasibility of this shared vision depends on its strategic fit and whether the shared vision is properly aligned with organizational objectives in order to filter and shape technological choice and diffusion. Shared vision is viewed as an organizational barometer for assessing the appropriateness of future technologies amidst a sea of overwhelming possibilities. We present a theoretical model to address an extended program of research focusing on important phases during diffusion, shared vision, change management and social alignment. We also make a call for further research into these theoretical linkages and into the development of feasible shared visions.",
"title": ""
},
{
"docid": "b91f54fd70da385625d9df127834d8c7",
"text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.",
"title": ""
},
{
"docid": "31305b698f82e902a5829abc2f272d5f",
"text": "It is now recognized that the Consensus problem is a fundamental problem when one has to design and implement reliable asynchronous distributed systems. This chapter is on the Consensus problem. It studies Consensus in two failure models, namely, the Crash/no Recovery model and the Crash/Recovery model. The assumptions related to the detection of failures that are required to solve Consensus in a given model are particularly emphasized.",
"title": ""
},
{
"docid": "2a9d399edc3c2dcc153d966760f38d80",
"text": "Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is over a computer network and the other is on a shared memory system. We establish an ergodic convergence rate O(1/ √ K) for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by √ K (K is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.",
"title": ""
},
{
"docid": "ed0465dc58b0f9c62e729fed4054bb58",
"text": "In this study, an instructional design model was employed for restructuring a teacher education course with technology. The model was applied in a science education method course, which was offered in two different but consecutive semesters with a total enrollment of 111 students in the fall semester and 116 students in the spring semester. Using tools, such as multimedia authoring tools in the fall semester and modeling software in the spring semester, teacher educators designed high quality technology-infused lessons for science and, thereafter, modeled them in classroom for preservice teachers. An assessment instrument was constructed to assess preservice teachers technology competency, which was measured in terms of four aspects, namely, (a) selection of appropriate science topics to be taught with technology, (b) use of appropriate technology-supported representations and transformations for science content, (c) use of technology to support teaching strategies, and (d) integration of computer activities with appropriate inquiry-based pedagogy in the science classroom. The results of a MANOVA showed that preservice teachers in the Modeling group outperformed preservice teachers overall performance in the Multimedia group, F = 21.534, p = 0.000. More specifically, the Modeling group outperformed the Multimedia group on only two of the four aspects of technology competency, namely, use of technology to support teaching strategies and integration of computer activities with appropriate pedagogy in the classroom, F = 59.893, p = 0.000, and F = 10.943, p = 0.001 respectively. The results indicate that the task of preparing preservice teachers to become technology competent is difficult and requires many efforts for providing them with ample of 0360-1315/$ see front matter 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2004.06.002 * Tel.: +357 22 753772; fax: +357 22 377950. E-mail address: [email protected]. 384 C. Angeli / Computers & Education 45 (2005) 383–398 opportunities during their education to develop the competencies needed to be able to teach with technology. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2bdc4df73912f4f2be4436e1fdd16d69",
"text": "Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.",
"title": ""
},
{
"docid": "2b4a2165cebff8326f97cab3063e1a62",
"text": "Pneumatic artificial muscles (PAMs) are becoming more commonly used as actuators in modern robotics. The most made and common type of these artificial muscles in use is the McKibben artificial muscle that was developed in 1950’s. This paper presents the geometric model of PAM and different Matlab models for pneumatic artificial muscles. The aim of our models is to relate the pressure and length of the pneumatic artificial muscles to the force it exerts along its entire exists.",
"title": ""
},
{
"docid": "f9806d3542f575d53ef27620e4aa493b",
"text": "Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.",
"title": ""
},
{
"docid": "68b25c8eefc5e2045065b0cf24652245",
"text": "A backscatter-based microwave imaging technique that compensates for frequency-dependent propagation effects is proposed for detecting early-stage breast cancer. An array of antennas is located near the surface of the breast and an ultrawideband pulse is transmitted sequentially from each antenna. The received backscattered signals are passed through a space-time beamformer that is designed to image backscattered signal energy as a function of location. As a consequence of the significant dielectric-properties contrast between normal and malignant tissue, locations corresponding to malignant tumors are associated with large energy levels in the image. The effectiveness of these algorithms is demonstrated using simulated backscattered signals obtained from an anatomically realistic MRI-derived computational electromagnetic breast model. Very small (2 mm) malignant tumors embedded within the complex fibroglandular structure of the breast are easily detected above the background clutter.",
"title": ""
}
] |
scidocsrr
|
350989ffb1a5cb279bcdf304778ade77
|
Representation Properties of Networks: Kolmogorov's Theorem Is Irrelevant
|
[
{
"docid": "fcb9614925e939898af060b9ee52f357",
"text": "The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L/sub 2/ function over (-1, 1)/sup n/. The net uses n input nodes, a single hidden layer whose width is determined by the function to be implemented and the allowable mean square error, and a linear output neuron. Error bounds and an example are given for the method.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "fca5eb1b215c4912d5b439ae67269a23",
"text": "We have developed a computer software package, IMOD, as a tool for analyzing and viewing three-dimensional biological image data. IMOD is useful for studying and modeling data from tomographic, serial section, and optical section reconstructions. The software allows image data to be visualized by several different methods. Models of the image data can be visualized by volume or contour surface rendering and can yield quantitative information.",
"title": ""
},
{
"docid": "690544595e0fa2e5f1c40e3187598263",
"text": "In this paper, a methodology is presented and employed for simulating the Internet of Things (IoT). The requirement for scalability, due to the possibly huge amount of involved sensors and devices, and the heterogeneous scenarios that might occur, impose resorting to sophisticated modeling and simulation techniques. In particular, multi-level simulation is regarded as a main framework that allows simulating large-scale IoT environments while keeping high levels of detail, when it is needed. We consider a use case based on the deployment of smart services in decentralized territories. A two level simulator is employed, which is based on a coarse agent-based, adaptive parallel and distributed simulation approach to model the general life of simulated entities. However, when needed a finer grained simulator (based on OMNeT++) is triggered on a restricted portion of the simulated area, which allows considering all issues concerned with wireless communications. Based on this use case, it is confirmed that the ad-hoc wireless networking technologies do represent a principle tool to deploy smart services over decentralized countrysides. Moreover, the performance evaluation confirms the viability of utilizing multi-level simulation for simulating large scale IoT environments.",
"title": ""
},
{
"docid": "2b0cc3aa68c671c7c14726b51e1713ca",
"text": "The conflux of two growing areas of technology— collaboration and visualization—into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another—in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.",
"title": ""
},
{
"docid": "feaa54ff80bac29319a33de7b252827a",
"text": "Feedback is assessing an individual's action in any endeavor. The judgment helps one to grow well in any field. By the feedback a student can understand and improve upon mistakes committed, teachers come to know about the student's capability and implement new teaching methods. New Technologies also come up for the enhancement of Student's Performance. A study of the assessment of student performance through various papers using data mining and also with ontology based applications makes one decide certain factors like confidence level, stress and time management, holistic approach towards an issue which may be useful in giving a prediction about the students' work performance level in organizations. The Survey encompasses the assessment of a student's performance in academics using Data mining Techniques and also with Ontology based Applications.",
"title": ""
},
{
"docid": "79d1d44fea2780cfbe1ae178cc456d06",
"text": "PURPOSE\nTo assess rates of burnout among US radiation oncology residents and evaluate program/resident factors associated with burnout.\n\n\nMETHODS AND MATERIALS\nA nationwide survey was distributed to residents in all US radiation oncology programs. The survey included the Maslach Burnout Index-Human Services Survey (MBI-HSS) as well as demographic and program-specific questions tailored to radiation oncology residents. Primary endpoints included rates of emotional exhaustion, depersonalization, and personal accomplishment from MBI-HSS subscale scores. Binomial logistic models determined associations between various residency/resident characteristics and high burnout levels.\n\n\nRESULTS\nOverall, 232 of 733 residents (31.2%) responded, with 205 of 733 (27.9%) completing the MBI-HSS. High levels of emotional exhaustion and depersonalization were reported in 28.3% and 17.1%, respectively; 33.1% experienced a high burnout level on at least 1 of these 2 MBI-HSS subscales. Low rates of personal accomplishment occurred in 12% of residents. Twelve residents (5.9%) reported feeling \"at the end of my rope\" on a weekly basis or more. On multivariable analysis there was a statistically significant inverse association between perceived adequacy of work-life balance (odds ratio 0.37; 95% confidence interval 0.17-0.83) and burnout.\n\n\nCONCLUSIONS\nApproximately one-third of radiation oncology residents have high levels of burnout symptoms, consistent with previous oncology literature, but lower levels than those among physicians and residents of other specialties. Particularly concerning was that more than 1 in 20 felt \"at the end of my rope\" on a weekly basis or more. Targeted interventions to identify symptoms of burnout among radiation oncology residents may help to prevent the negative downstream consequences of this syndrome.",
"title": ""
},
{
"docid": "31add593ce5597c24666d9662b3db89d",
"text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.",
"title": ""
},
{
"docid": "dfaa6e183e70cbacc5c9de501993b7af",
"text": "Traditional buildings consume more of the energy resources than necessary and generate a variety of emissions and waste. The solution to overcoming these problems will be to build them green and smart. One of the significant components in the concept of smart green buildings is using renewable energy. Solar energy and wind energy are intermittent sources of energy, so these sources have to be combined with other sources of energy or storage devices. While batteries and/or supercapacitors are an ideal choice for short-term energy storage, regenerative hydrogen-oxygen fuel cells are a promising candidate for long-term energy storage. This paper is to design and test a green building energy system that consists of renewable energy, energy storage, and energy management. The paper presents the architecture of the proposed green building energy system and a simulation model that allows for the study of advanced control strategies for the green building energy system. An example green building energy system is tested and simulation results show that the variety of energy source and storage devices can be managed very well.",
"title": ""
},
{
"docid": "242b631b60b3abf5646c7d191477adbe",
"text": "■ Abstract We highlight the complexity of land-use/cover change and propose a framework for a more general understanding of the issue, with emphasis on tropical regions. The review summarizes recent estimates on changes in cropland, agricultural intensification, tropical deforestation, pasture expansion, and urbanization and identifies the still unmeasured land-cover changes. Climate-driven land-cover modifications interact with land-use changes. Land-use change is driven by synergetic factor combinations of resource scarcity leading to an increase in the pressure of production on resources, changing opportunities created by markets, outside policy intervention, loss of adaptive capacity, and changes in social organization and attitudes. The changes in ecosystem goods and services that result from land-use change feed back on the drivers of land-use change. A restricted set of dominant pathways of land-use change is identified. Land-use change can be understood using the concepts of complex adaptive systems and transitions. Integrated, place-based research on land-use/land-cover change requires a combination of the agent-based systems and narrative perspectives of understanding. We argue in this paper that a systematic analysis of local-scale land-use change studies, conducted over a range of timescales, helps to uncover general principles that provide an explanation and prediction of new land-use changes.",
"title": ""
},
{
"docid": "6bcedbceeda2e995044b21363bd95180",
"text": "The orbitofrontal cortex represents the reward or affective value of primary reinforcers including taste, touch, texture, and face expression. It learns to associate other stimuli with these to produce representations of the expected reward value for visual, auditory, and abstract stimuli including monetary reward value. The orbitofrontal cortex thus plays a key role in emotion, by representing the reward value of the goals for action. The learning process is stimulus-reinforcer association learning. Negative reward prediction error neurons are related to this affective learning. Activations in the orbitofrontal cortex correlate with the subjective emotional experience of affective stimuli, and damage to the orbitofrontal cortex impairs emotion-related learning, emotional behaviour, and subjective affective state. Top-down attention to affect modulates orbitofrontal cortex representations, and attention to intensity modulates representations in earlier cortical areas that represent the physical properties of stimuli. Top-down word-level cognitive inputs can bias affective representations in the orbitofrontal cortex, providing a mechanism for cognition to influence emotion. Whereas the orbitofrontal cortex provides a representation of reward or affective value on a continuous scale, areas beyond the orbitofrontal cortex such as the medial prefrontal cortex area 10 are involved in binary decision-making when a choice must be made. For this decision-making, the orbitofrontal cortex provides a representation of the value of each specific reward on the same scale, with no conversion to a common currency. Increased activity in a lateral orbitofrontal cortex non-reward area provides a new attractor-related approach to understanding and treating depression. Consistent with the theory, the lateral orbitofrontal cortex has increased functional connectivity in depression, and the medial orbitofrontal cortex, involved in reward, has decreased functional connectivity in depression.",
"title": ""
},
{
"docid": "98f75a69417bc3eb16d13e1dc39f1001",
"text": "This paper provides a comprehensive overview of critical developments in the field of multiple-input multiple-output (MIMO) wireless communication systems. The state of the art in single-user MIMO (SU-MIMO) and multiuser MIMO (MU-MIMO) communications is presented, highlighting the key aspects of these technologies. Both open-loop and closed-loop SU-MIMO systems are discussed in this paper with particular emphasis on the data rate maximization aspect of MIMO. A detailed review of various MU-MIMO uplink and downlink techniques then follows, clarifying the underlying concepts and emphasizing the importance of MU-MIMO in cellular communication systems. This paper also touches upon the topic of MU-MIMO capacity as well as the promising convex optimization approaches to MIMO system design.",
"title": ""
},
{
"docid": "d4af143e26b122f32697a4ac9973d748",
"text": "The Keivitsansarvi deposit, in northern Finland, is a low-grade dissemination of Ni–Cu sulfides containing 1.3–26.6 g/t PGE. It occurs in the northeastern part of the 2.05 Ga Keivitsa intrusion and is hosted by olivine wehrlite and olivine websterite, metamorphosed at greenschist-facies conditions. The sulfide-mineralized area shows variable bulk S, Ni, Co, Cu, PGE, Au, As, Sb, Se, Te and Bi contents. S and Au tend to decrease irregularly from bottom to top of the deposit, whereas Ni, Ni/Co, PGE, As, Sb, Se, Te and Bi tend to increase. Thus, the upper section of the deposit has low S (<1.5 wt.%) and Au (160 ppb on average), but elevated levels of the PGE (2120 ppb Pt, 1855 ppb Pd on average). Sulfides consist of intergranular, highly disseminated aggregates mainly made up of pentlandite, pyrite, and chalcopyrite (all showing fine intergrowths), as well as nickeline, maucherite and gersdorffite in some samples. Most platinum-group minerals occur as single, minute grains included in silicates (57%) or attached to the grain boundaries of sulfides (36%). Only a few PGM grains (6%) are included in sulfides. Pt minerals (mainly moncheite and sperrylite) are the most abundant PGM, whereas Pd minerals (mainly merenskyite, Pd-rich melonite, kotulskite and sobolevskite) are relatively scarce, and most contain significant amounts of Pt. Whole-rock PGE analyses show a general Pd enrichment with respect to Pt. This discrepancy results from the fact that a major part of Pd is hidden in solid solution in the structure of gersdorffite, nickeline, maucherite and pentlandite. The mineral assemblages and textures of the upper section of the Keivitsansarvi deposit result from the combined effect of serpentinization, hydrothermal alteration and metamorphism of preexisting, low-grade disseminated Ni–Cu ore formed by the intercumulus crystallization of a small fraction of immiscible sulfide melt. Serpentinization caused Ni enrichment of sulfides and preserved the original PGE concentrations of the magmatic mineralization. Later, coeval with greenschist-facies metamorphism, PGE and some As (together with other semimetals) were leached out from other mineralized zones by hydrothermal fluids, probably transported in the form of chloride complexes, and precipitated in discrete Ni–Cu–PGE-rich horizons, as observed in the upper part of the deposit. Metamorphism also caused partial dissolution and redistribution of the sulfide (and arsenide) aggregates, contributing to a further Ni enrichment in the sulfide ores.",
"title": ""
},
{
"docid": "1f43cd2c1e2befc95f0ada413bfa7d1e",
"text": "Mobile robot is an autonomous agent capable of navigating intelligently anywhere using sensor-actuator control techniques. The applications of the autonomous mobile robot in many fields such as industry, space, defence and transportation, and other social sectors are growing day by day. The mobile robot performs many tasks such as rescue operation, patrolling, disaster relief, planetary exploration, and material handling, etc. Therefore, an intelligent mobile robot is required that could travel autonomously in various static and dynamic environments. Several techniques have been applied by the various researchers for mobile robot navigation and obstacle avoidance. The present article focuses on the study of the intelligent navigation techniques, which are capable of navigating a mobile robot autonomously in static as well as dynamic environments.",
"title": ""
},
{
"docid": "f27547cfee95505fe8a2f44f845ddaed",
"text": "High-performance, two-dimensional arrays of parallel-addressed InGaN blue micro-light-emitting diodes (LEDs) with individual element diameters of 8, 12, and 20 /spl mu/m, respectively, and overall dimensions 490 /spl times/490 /spl mu/m, have been fabricated. In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.",
"title": ""
},
{
"docid": "298b65526920c7a094f009884439f3e4",
"text": "Big Data concerns massive, heterogeneous, autonomous sources with distributed and decentralized control. These characteristics make it an extreme challenge for organizations using traditional data management mechanism to store and process these huge datasets. It is required to define a new paradigm and re-evaluate current system to manage and process Big Data. In this paper, the important characteristics, issues and challenges related to Big Data management has been explored. Various open source Big Data analytics frameworks that deal with Big Data analytics workloads have been discussed. Comparative study between the given frameworks and suitability of the same has been proposed.",
"title": ""
},
{
"docid": "171b5d7c884cd934af602bf000451cb9",
"text": "Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving.",
"title": ""
},
{
"docid": "649118bb3927a2b6bbb924d838fdbac8",
"text": "Research has demonstrated that extensive structural and functional brain development continues throughout adolescence. A popular notion emerging from this work states that a relative immaturity in frontal cortical neural systems could explain adolescents' high rates of risk-taking, substance use and other dangerous behaviours. However, developmental neuroimaging studies do not support a simple model of frontal cortical immaturity. Rather, growing evidence points to the importance of changes in social and affective processing, which begin around the onset of puberty, as crucial to understanding these adolescent vulnerabilities. These changes in social–affective processing also may confer some adaptive advantages, such as greater flexibility in adjusting one's intrinsic motivations and goal priorities amidst changing social contexts in adolescence.",
"title": ""
},
{
"docid": "b35d34cca3cb50247dc030ff8b9c7ac7",
"text": "Article history: Available online 29 June 2015",
"title": ""
},
{
"docid": "170e7a72a160951e880f18295d100430",
"text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.",
"title": ""
},
{
"docid": "a9a65ee9ac1469b24e8900de01eb8b19",
"text": "The lung has significant susceptibility to injury from a variety of chemotherapeutic agents. The clinician must be familiar with classic chemotherapeutic agents with well-described pulmonary toxicities and must also be vigilant about a host of new agents that may exert adverse effects on lung function. The diagnosis of chemotherapy-associated lung disease remains an exclusionary process, particularly with respect to considering usual and atypical infections, as well as recurrence of the underlying neoplastic process in these immune compromised patients. In many instances, chemotherapy-associated lung disease may respond to withdrawal of the offending agent and to the judicious application of corticosteroid therapy.",
"title": ""
}
] |
scidocsrr
|
aaf3bdeab1b2a539cf00bbd7adcdd263
|
A Redundancy-Aware Sentence Regression Framework for Extractive Summarization
|
[
{
"docid": "73b3c2f34386d8ba642a61528d158d21",
"text": "Existing multi-document summarization systems usually rely on a specific summarization model (i.e., a summarization method with a specific parameter setting) to extract summaries for different document sets with different topics. However, according to our quantitative analysis, none of the existing summarization models can always produce high-quality summaries for different document sets, and even a summarization model with good overall performance may produce low-quality summaries for some document sets. On the contrary, a baseline summarization model may produce high-quality summaries for some document sets. Based on the above observations, we treat the summaries produced by different summarization models as candidate summaries, and then explore discriminative reranking techniques to identify high-quality summaries from the candidates for difference document sets. We propose to extract a set of candidate summaries for each document set based on an ILP framework, and then leverage Ranking SVM for summary reranking. Various useful features have been developed for the reranking process, including word-level features, sentence-level features and summary-level features. Evaluation results on the benchmark DUC datasets validate the efficacy and robustness of our proposed approach.",
"title": ""
},
{
"docid": "cfeb97a848766269c2088d8191206cc8",
"text": "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"title": ""
}
] |
[
{
"docid": "13774d2655f2f0ac575e11991eae0972",
"text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.",
"title": ""
},
{
"docid": "187b77bfa5ac3110a7fd91ff17a1b456",
"text": "Educational games have enhanced the value of instruction procedures in institutions and business organizations. Factors that increase students’ adoption of learning games have been widely studied in past; however, the effect of these factors on learners’ performance is yet to be explored. In this study, factors of Enjoyment, Happiness, and Intention to Use were chosen as important attitudes in learning educational games and increasing learning performance. A two-step between group experiment was conducted: the first study compared game-based learning and traditional instruction in order to verify the value of the game. 41 Gymnasium (middle school) students were involved, and the control and experimental groups were formed based on a pretest method. The second study, involving 46 Gymnasium students, empirically evaluates whether and how certain attitudinal factors affect learners’ performance. The results of the two-part experiment showed that a) the game demonstrated good performance (as compared to traditional instruction) concerning the gain of knowledge, b) learners’ enjoyment of the game has a significant relation with their performance, and c) learners’ intention to use and happiness with the game do not have any relation with their performance. Our results suggest that there are attitudinal factors affecting knowledge acquisition gained by a game. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cf58d2d80764a5c8446d82b1b9499c00",
"text": "Estimation is a critical component of synchronization in wireless and signal processing systems. There is a rich body of work on estimator derivation, optimization, and statistical characterization from analytic system models which are used pervasively today. We explore an alternative approach to building estimators which relies principally on approximate regression using large datasets and large computationally efficient artificial neural network models capable of learning non-linear function mappings which provide compact and accurate estimates. For single carrier PSK modulation, we explore the accuracy and computational complexity of such estimators compared with the current gold-standard analytically derived alternatives. We compare performance in various wireless operating conditions and consider the trade offs between the two different classes of systems. Our results show the learned estimators can provide improvements in areas such as short-time estimation and estimation under non-trivial real world channel conditions such as fading or other non-linear hardware or propagation effects.",
"title": ""
},
{
"docid": "9b45bb1734e9afc34b14fa4bc47d8fba",
"text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?",
"title": ""
},
{
"docid": "314e1b8bbcc0a5735d86bb751d524a93",
"text": "Ubiquinone (coenzyme Q), in addition to its function as an electron and proton carrier in mitochondrial and bacterial electron transport linked to ATP synthesis, acts in its reduced form (ubiquinol) as an antioxidant, preventing the initiation and/or propagation of lipid peroxidation in biological membranes and in serum low-density lipoprotein. The antioxidant activity of ubiquinol is independent of the effect of vitamin E, which acts as a chain-breaking antioxidant inhibiting the propagation of lipid peroxidation. In addition, ubiquinol can efficiently sustain the effect of vitamin E by regenerating the vitamin from the tocopheroxyl radical, which otherwise must rely on water-soluble agents such as ascorbate (vitamin C). Ubiquinol is the only known lipid-soluble antioxidant that animal cells can synthesize de novo, and for which there exist enzymic mechanisms that can regenerate the antioxidant from its oxidized form resulting from its inhibitory effect of lipid peroxidation. These features, together with its high degree of hydrophobicity and its widespread occurrence in biological membranes and in low-density lipoprotein, suggest an important role of ubiquinol in cellular defense against oxidative damage. Degenerative diseases and aging may bc 1 manifestations of a decreased capacity to maintain adequate ubiquinol levels.",
"title": ""
},
{
"docid": "b75847420d86f2dfd4d1e43b8f23d449",
"text": "Since the inception of Deep Reinforcement Learning (DRL) algorithms, there has been a growing interest in both research and industrial communities in the promising potentials of this paradigm. The list of current and envisioned applications of deep RL ranges from autonomous navigation and robotics to control applications in the critical infrastructure, air traffic control, defense technologies, and cybersecurity. While the landscape of opportunities and the advantages of deep RL algorithms are justifiably vast, the security risks and issues in such algorithms remain largely unexplored. To facilitate and motivate further research on these critical challenges, this paper presents a foundational treatment of the security problem in DRL. We formulate the security requirements of DRL, and provide a high-level threat model through the classification and identification of vulnerabilities, attack vectors, and adversarial capabilities. Furthermore, we present a review of current literature on security of deep RL from both offensive and defensive perspectives. Lastly, we enumerate critical research venues and open problems in mitigation and prevention of intentional attacks against deep RL as a roadmap for further research in this area.",
"title": ""
},
{
"docid": "1700ee1ba5fef2c9efa9a2b8bfa7d6bd",
"text": "This work studies resource allocation in a cloud market through the auction of Virtual Machine (VM) instances. It generalizes the existing literature by introducing combinatorial auctions of heterogeneous VMs, and models dynamic VM provisioning. Social welfare maximization under dynamic resource provisioning is proven NP-hard, and modeled with a linear integer program. An efficient α-approximation algorithm is designed, with α ~ 2.72 in typical scenarios. We then employ this algorithm as a building block for designing a randomized combinatorial auction that is computationally efficient, truthful in expectation, and guarantees the same social welfare approximation factor α. A key technique in the design is to utilize a pair of tailored primal and dual LPs for exploiting the underlying packing structure of the social welfare maximization problem, to decompose its fractional solution into a convex combination of integral solutions. Empirical studies driven by Google Cluster traces verify the efficacy of the randomized auction.",
"title": ""
},
{
"docid": "66c8bf3b0cfbfdf8add2fffd055b7f03",
"text": "This paper continues the long-standing tradition of gradually improving the construction speed of spatial acceleration structures using sorted Morton codes. Previous work on this topic forms a clear sequence where each new paper sheds more light on the nature of the problem and improves the hierarchy generation phase in terms of performance, simplicity, parallelism and generality. Previous approaches constructed the tree by firstly generating the hierarchy and then calculating the bounding boxes of each node by using a bottom-up traversal. Continuing the work, we present an improvement by providing a bottom-up method that finds each node’s parent while assigning bounding boxes, thus constructing the tree in linear time in a single kernel launch. Also, our method allows clustering the sorted points using an user-defined distance metric function.",
"title": ""
},
{
"docid": "41be57e3d2a09de36f3881cb08a028db",
"text": "Unified and formal knowledge models of the information security domain are fundamental requirements for supporting and enhancing existing risk management approaches. This paper describes a security ontology which provides an ontological structure for information security domain knowledge. Besides existing best-practice guidelines such as the German IT Grundschutz Manual also concrete knowledge of the considered organization is incorporated. An evaluation conducted by an information security expert team has shown that this knowledge model can be used to support a broad range of information security risk management approaches.",
"title": ""
},
{
"docid": "7e736d4f906a28d4796fe7ac404b5f94",
"text": "The internal program representation chosen for a software development environment plays a critical role in the nature of that environment. A form should facilitate implementation and contribute to the responsiveness of the environment to the user. The program dependence graph (PDG) may be a suitable internal form. It allows programs to be sliced in linear time for debugging and for use by language-directed editors. The slices obtained are more accurate than those obtained with existing methods because I/O is accounted for correctly and irrelevant statements on multi-statement lines are not displayed. The PDG may be interpreted in a data driven fashion or may have highly optimized (including vectorized) code produced from it. It is amenable to incremental data flow analysis, improving response time to the user in an interactive environment and facilitating debugging through data flow anomaly detection. It may also offer a good basis for software complexity metrics, adding to the completeness of an environment based on it.",
"title": ""
},
{
"docid": "82e13bf3c98942f69bc438535f882fbd",
"text": "While residential broadband Internet access is popular in many parts of the world, only a few studies have examined the characteristics of such traffic. In this paper we describe observations from monitoring the network activity for more than 20,000 residential DSL customers in an urban area. To ensure privacy, all data is immediately anonymized. We augment the anonymized packet traces with information about DSL-level sessions, IP (re-)assignments, and DSL link bandwidth.\n Our analysis reveals a number of surprises in terms of the mental models we developed from the measurement literature. For example, we find that HTTP - not peer-to-peer - traffic dominates by a significant margin; that more often than not the home user's immediate ISP connectivity contributes more to the round-trip times the user experiences than the WAN portion of the path; and that the DSL lines are frequently not the bottleneck in bulk-transfer performance.",
"title": ""
},
{
"docid": "3cd359e15635b8c9485df5c37118e607",
"text": "PROBLEM/CONDITION\nData from a population-based, multisite surveillance network were used to determine the prevalence of autism spectrum disorders (ASDs) among children aged 8 years in 14 areas of the United States and to describe the characteristics of these children.\n\n\nREPORTING PERIOD\n2002.\n\n\nMETHODS\nChildren aged 8 years were identified as having an ASD through screening and abstraction of evaluation records at health facilities for all 14 sites and through information from psychoeducational evaluations for special education services for 10 of the 14 sites. Case status was determined through clinician review of data abstracted from the records. Children whose parent(s) or legal guardian(s) resided in the respective areas in 2002 and whose records documented behaviors consistent with the Diagnostic and Statistical Manual, Fourth Edition, Text Revision (DSM-IV-TR) criteria for autistic disorder; pervasive developmental disorder, not otherwise specified; or Asperger disorder were classified as having ASDs.\n\n\nRESULTS\nFor 2002, of 407,578 children aged 8 years in the 14 surveillance areas, 2,685 (0.66%) were identified as having an ASD. ASD prevalence per 1,000 children aged 8 years ranged from 3.3 (Alabama) to 10.6 (New Jersey), with the majority of sites ranging from 5.2 to 7.6 (overall mean: 6.6 [i.e., one of every 152 children across all sites). ASD prevalence was significantly lower than all other sites in Alabama (p<0.001) and higher in New Jersey (p<0.0001). ASD prevalence varied by identification source, with higher average prevalence for ASDs in sites with access to health and education records (mean: 7.2) compared with sites with health records only (mean: 5.1). Five sites identified a higher prevalence of ASDs for non-Hispanic white children than for non-Hispanic black children. The ratio of males to females ranged from 3.4:1.0 in Maryland, South Carolina, and Wisconsin to 6.5:1.0 in Utah. The majority of children were receiving special education services at age 8 years and had a documented history of concerns regarding their development before age 3 years. However, the median age of earliest documented ASD diagnosis was much later (range: 49 months [Utah]--66 months [Alabama]). The proportion of children with characteristics consistent with the criteria for an ASD classification who had a previously documented ASD classification varied across sites. In the majority of sites, females with an ASD were more likely than males to have cognitive impairment. For the six sites for which prevalence data were available from both 2000 and 2002, ASD prevalence was stable in four sites and increased in two sites (17% in Georgia and 39% in West Virginia).\n\n\nINTERPRETATION\nResults from the second report of a U.S. multisite collaboration to monitor ASD prevalence demonstrated consistency of prevalence in the majority of sites, with variation in two sites. Prevalence was stable in the majority of sites for which 2 years of data were available, but an increase in West Virginia and a trend toward an increase in Georgia indicate the need for ongoing monitoring of ASD prevalence.\n\n\nPUBLIC HEALTH ACTIONS\nThese ASD prevalence data provide the most complete information on the prevalence of the ASDs in the United States to date. The data confirm that ASD prevalence is a continuing urgent public health concern affecting an approximate average of one child in every 150 and that efforts are needed to improve early identification of ASDs.",
"title": ""
},
{
"docid": "eab514f5951a9e2d3752002c7ba799d8",
"text": "In industrial fabric productions, automated real time systems are needed to find out the minor defects. It will save the cost by not transporting defected products and also would help in making compmay image of quality fabrics by sending out only undefected products. A real time fabric defect detection system (FDDS), implementd on an embedded DSP platform is presented here. Textural features of fabric image are extracted based on gray level co-occurrence matrix (GLCM). A sliding window technique is used for defect detection where window moves over the whole image computing a textural energy from the GLCM of the fabric image. The energy values are compared to a reference and the deviations beyond a threshold are reported as defects and also visually represented by a window. The implementation is carried out on a TI TMS320DM642 platform and programmed using code composer studio software. The real time output of this implementation was shown on a monitor. KeywordsFabric Defects, Texture, Grey Level Co-occurrence Matrix, DSP Kit, Energy Computation, Sliding Window, FDDS",
"title": ""
},
{
"docid": "fc32e7b46094c1cfe878c8324b91fcf2",
"text": "The recent increase in information technologies dedicated to optimal design, associated with the progress of the numerical tools for predicting ship hydrodynamic performances, allows significant improvement in ship design. A consortium of fourteen European partners – bringing together ship yards, model basins, consultants, research centres and universities – has therefore conducted a three years European R&D project (FANTASTIC) with the goal to improve the functional design of ship hull shapes. The following key issues were thus considered: parametric shape modelling was worked on through several complementary approaches, CFD tools and associated interfaces were enhanced to meet efficiency and robustness requirements, appropriate design space exploration and optimisation techniques were investigated. The resulting procedures where then implemented, for practical assessment purposes, in some end-users design environments, and a number of applications were undertaken.. Significant gains can be expected from this approach in design, in term of time used for performance analysis and explored range of design variations.",
"title": ""
},
{
"docid": "1772d22c19635b6636e42f8bb1b1a674",
"text": "• MacArthur Fellowship, 2010 • Guggenheim Fellowship, 2010 • Li Ka Shing Foundation Women in Science Distinguished Lectu re Series Award, 2010 • MIT Technology Review TR-35 Award (recognizing the world’s top innovators under the age of 35), 2009. • Okawa Foundation Research Award, 2008. • Sloan Research Fellow, 2007. • Best Paper Award, 2007 USENIX Security Symposium. • George Tallman Ladd Research Award, Carnegie Mellon Univer sity, 2007. • Highest ranked paper, 2006 IEEE Security and Privacy Sympos ium; paper invited to a special issue of the IEEE Transactions on Dependable and Secure Computing. • NSF CAREER Award on “Exterminating Large Scale Internet Att acks”, 2005. • IBM Faculty Award, 2005. • Highest ranked paper, 1999 IEEE Computer Security Foundati on Workshop; paper invited to a special issue of Journal of Computer Security.",
"title": ""
},
{
"docid": "9b0ed9c60666c36f8cf33631f791687d",
"text": "The central notion of Role-Based Access Control (RBAC) is that users do not have discretionary access to enterprise objects. Instead, access permissions are administratively associated with roles, and users are administratively made members of appropriate roles. This idea greatly simplifies management of authorization while providing an opportunity for great flexibility in specifying and enforcing enterprisespecific protection policies. Users can be made members of roles as determined by their responsibilities and qualifications and can be easily reassigned from one role to another without modifying the underlying access structure. Roles can be granted new permissions as new applications and actions are incorporated, and permissions can be revoked from roles as needed. Some users and vendors have recognized the potential benefits of RBAC without a precise definition of what RBAC constitutes. Some RBAC features have been implemented in commercial products without a frame of reference as to the functional makeup and virtues of RBAC [1]. This lack of definition makes it difficult for consumers to compare products and for vendors to get credit for the effectiveness of their products in addressing known security problems. To correct these deficiencies, a number of government sponsored research efforts are underway to define RBAC precisely in terms of its features and the benefits it affords. This research includes: surveys to better understand the security needs of commercial and government users [2], the development of a formal RBAC model, architecture, prototype, and demonstrations to validate its use and feasibility. As a result of these efforts, RBAC systems are now beginning to emerge. The purpose of this paper is to provide additional insight as to the motivations and functionality that might go behind the official RBAC name.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
},
{
"docid": "3370a138771566427fde6208dac759b7",
"text": "Communication protocols determine how network components interact with each other. Therefore, the ability to derive a speci cation of a protocol can be useful in various contexts, such as to support deeper black-box testing or e ective defense mechanisms. Unfortunately, it is often hard to obtain the speci cation because systems implement closed (i.e., undocumented) protocols, or because a time consuming translation has to be performed, from the textual description of the protocol to a format readable by the tools. To address these issues, we propose a new methodology to automatically infer a speci cation of a protocol from network traces, which generates automata for the protocol language and state machine. Since our solution only resorts to interaction samples of the protocol, it is well-suited to uncover the message formats and protocol states of closed protocols and also to automate most of the process of specifying open protocols. The approach was implemented in ReverX and experimentally evaluated with publicly available FTP traces. Our results show that the inferred speci cation is a good approximation of the reference speci cation, exhibiting a high level of precision and recall.",
"title": ""
},
{
"docid": "9b9a2a9695f90a6a9a0d800192dd76f6",
"text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",
"title": ""
},
{
"docid": "96fceea1d65c319a40aefef58577859f",
"text": "PURPOSE\nTo evaluate the diagnostic utility of scaphoid dorsal subluxation on magnetic resonance imaging (MRI) as a predictor of scapholunate interosseous ligament (SLIL) tears and compare this with radiographic findings.\n\n\nMETHODS\nThirty-six MRIs were retrospectively reviewed: 18 with known operative findings of complete Geissler IV SLIL tears that were surgically repaired, and 18 MRIs performed for ulnar-sided wrist pain but no SLIL tear. Dorsal subluxation of the scaphoid was measured on the sagittal MRI cut, which demonstrated the maximum subluxation. Independent samples t tests were used to compare radiographic measurements of scapholunate (SL) gap, SL angle, and capitolunate/third metacarpal-lunate angles between the SLIL tear and the control groups and to compare radiographic measurements between wrists that had dorsal subluxation of the scaphoid and wrists that did not have dorsal subluxation. Interrater reliability of subluxation measurements on lateral radiographs and on MRI were calculated using kappa coefficients.\n\n\nRESULTS\nThirteen of 18 wrists with complete SLIL tears had greater than 10% dorsal subluxation of the scaphoid relative to the scaphoid facet. Average subluxation in this group was 34%. Four of 18 wrists with known SLIL tears had no subluxation. No wrists without SLIL tears (control group) had dorsal subluxation. The SL angle, capitolunate/third metacarpal-lunate angle and SL gap were greater in wrists that had dorsal subluxation of the scaphoid on MRI. Interrater reliability of measurements of dorsal subluxation of the scaphoid was superior on MRI than on lateral x-ray.\n\n\nCONCLUSIONS\nAn MRI demonstration of dorsal subluxation of the scaphoid, of as little as 10%, as a predictor of SLIL tear had a sensitivity of 72% and a specificity of 100%. The high positive predictive value indicates that the presence of dorsal subluxation accurately predicts SLIL tear.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nDiagnostic II.",
"title": ""
}
] |
scidocsrr
|
1edd1ffbef283d1cebfa1a3ce9e8a1ac
|
LabelRankT: incremental community detection in dynamic networks via label propagation
|
[
{
"docid": "a50ec2ab9d5d313253c6656049d608b3",
"text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.",
"title": ""
},
{
"docid": "f96bf84a4dfddc8300bb91227f78b3af",
"text": "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and realworld networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
"title": ""
}
] |
[
{
"docid": "dde5083017c2db3ffdd90668e28bab4b",
"text": "Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (“OWL for Services”) is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.",
"title": ""
},
{
"docid": "5d624fadc5502ef0b65c227d4dd47a9a",
"text": "In this work, highly selective filters based on periodic arrays of electrically small resonators are pointed out. The high-pass filters are implemented in microstrip technology by etching complementary split ring resonators (CSRRs), or complementary spiral resonators (CSRs), in the ground plane, and series capacitive gaps, or interdigital capacitors, in the signal strip. The structure exhibits a composite right/left handed (CRLH) behavior and, by properly tuning the geometry of the elements, a high pass response with a sharp transition band is obtained. The low-pass filters, also implemented in microstrip technology, are designed by cascading open complementary split ring resonators (OCSRRs) in the signal strip. These low pass filters do also exhibit a narrow transition band. The high selectivity of these microwave filters is due to the presence of a transmission zero. Since the resonant elements are small, filter dimensions are compact. Several prototype device examples are reported in this paper.",
"title": ""
},
{
"docid": "eaa6daff2f28ea7f02861e8c67b9c72b",
"text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "9ffd665d6fe680fc4e7b9e57df48510c",
"text": "BACKGROUND\nIn light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development.\n\n\nMETHODS\nIn a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection.\n\n\nRESULTS\nA total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events.\n\n\nCONCLUSIONS\nThe CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).",
"title": ""
},
{
"docid": "c3c7c392b4e7afedb269aa39e2b4680a",
"text": "The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.",
"title": ""
},
{
"docid": "907b8a8a8529b09114ae60e401bec1bd",
"text": "Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.",
"title": ""
},
{
"docid": "39351cdf91466aa12576d9eb475fb558",
"text": "Fault tolerance is a remarkable feature of biological systems and their self-repair capability influence modern electronic systems. In this paper, we propose a novel plastic neural network model, which establishes homeostasis in a spiking neural network. Combined with this plasticity and the inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller implemented on an FPGA establishing obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on Xilinx Artix-7 FPGA. The system is able to maintain stable firing (tolerance ±10%) with a loss of up to 75% of the original synaptic inputs to a neuron. Our repair mechanism has minimal hardware overhead with a tuning circuit (repair unit) which consumes only three slices/neuron for implementing a threshold voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and, therefore, supports scalable implementations. This paper opens a novel way of implementing the behavior of natural fault tolerant system in hardware establishing homeostatic self-repair behavior.",
"title": ""
},
{
"docid": "8d02b303ad5fc96a082880d703682de4",
"text": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive <italic>regular clinical motifs</italic> from <italic> irregular episodic records</italic>. We present <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math> </inline-formula> (short for <italic>Deep</italic> <italic>r</italic>ecord), a new <italic>end-to-end</italic> deep learning system that learns to extract features from medical records and predicts future risk automatically. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> permits transparent inspection and visualization of its inner working. We validate <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$ </tex-math></inline-formula> on hospital data to predict unplanned readmission after discharge. <inline-formula> <tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.",
"title": ""
},
{
"docid": "ca8c262513466709a9d1eee198c804cc",
"text": "Theories of language production have long been expressed as connectionist models. We outline the issues and challenges that must be addressed by connectionist models of lexical access and grammatical encoding, and review three recent models. The models illustrate the value of an interactive activation approach to lexical access in production, the need for sequential output in both phonological and grammatical encoding, and the potential for accounting for structural effects on errors and structural priming from learning.",
"title": ""
},
{
"docid": "38d650cb945dc50d97762186585659a4",
"text": "Sustainable biofuels, biomaterials, and fine chemicals production is a critical matter that research teams around the globe are focusing on nowadays. Polyhydroxyalkanoates represent one of the biomaterials of the future due to their physicochemical properties, biodegradability, and biocompatibility. Designing efficient and economic bioprocesses, combined with the respective social and environmental benefits, has brought together scientists from different backgrounds highlighting the multidisciplinary character of such a venture. In the current review, challenges and opportunities regarding polyhydroxyalkanoate production are presented and discussed, covering key steps of their overall production process by applying pure and mixed culture biotechnology, from raw bioprocess development to downstream processing.",
"title": ""
},
{
"docid": "f923a3a18e8000e4094d4a6d6e69b18f",
"text": "We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented with a discussion about the remaining gap to meet a daytime normal weather condition production system.",
"title": ""
},
{
"docid": "b4b66392aec0c4e00eb6b1cabbe22499",
"text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. “city”) Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826",
"title": ""
},
{
"docid": "4a6dc591d385d0fb02a98067d8a42f33",
"text": "A new field has emerged to investigate the cognitive neuroscience of social behaviour, the popularity of which is attested by recent conferences, special issues of journals and by books. But the theoretical underpinnings of this new field derive from an uneasy marriage of two different approaches to social behaviour: sociobiology and evolutionary psychology on the one hand, and social psychology on the other. The first approach treats the study of social behaviour as a topic in ethology, continuous with studies of motivated behaviour in other animals. The second approach has often emphasized the uniqueness of human behaviour, and the uniqueness of the individual person, their environment and their social surroundings. These two different emphases do not need to conflict with one another. In fact, neuroscience might offer a reconciliation between biological and psychological approaches to social behaviour in the realization that its neural regulation reflects both innate, automatic and COGNITIVELY IMPENETRABLE mechanisms, as well as acquired, contextual and volitional aspects that include SELF-REGULATION. We share the first category of features with other species, and we might be distinguished from them partly by elaborations on the second category of features. In a way, an acknowledgement of such an architecture simply provides detail to the way in which social cognition is complex — it is complex because it is not monolithic, but rather it consists of several tracks of information processing that can be variously recruited depending on the circumstances. Specifying those tracks, the conditions under which they are engaged, how they interact, and how they must ultimately be coordinated to regulate social behaviour in an adaptive fashion, is the task faced by a neuroscientific approach to social cognition.",
"title": ""
},
{
"docid": "95395c693b4cdfad722ae0c3545f45ef",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "175229c7b756a2ce40f86e27efe28d53",
"text": "This paper describes a comparative study of the envelope extraction algorithms for the cardiac sound signal segmentation. In order to extract the envelope curves based on the time elapses of the first and the second heart sounds of cardiac sound signals, three representative algorithms such as the normalized average Shannon energy, the envelope information of Hilbert transform, and the cardiac sound characteristic waveform (CSCW) are introduced. Performance comparison of the envelope extraction algorithms, and the advantages and disadvantages of the methods are examined by some parameters. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5d80bf63f19f3aa271c0d16e179c90d6",
"text": "3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-case they are respectively adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "bd9064905ba4ed166ad1e9c41eca7b34",
"text": "Governments worldwide are encouraging public agencies to join e-Government initiatives in order to provide better services to their citizens and businesses; hence, methods of evaluating the readiness of individual public agencies to execute specific e-Government programs and directives are a key ingredient in the successful expansion of e-Government. To satisfy this need, a model called the eGovernment Maturity Model (eGov-MM) was developed, integrating the assessment of technological, organizational, operational, and human capital capabilities, under a multi-dimensional, holistic, and evolutionary approach. The model is strongly supported by international best practices, and provides tuning mechanisms to enable its alignment with nation-wide directives on e-Government. This article describes how the model was conceived, designed, developed, field tested by expert public officials from several government agencies, and finally applied to a selection of 30 public agencies in Chile, generating the first formal measurements, assessments, and rankings of their readiness for eGovernment. The implementation of the model also provided several recommendations to policymakers at the national and agency levels.",
"title": ""
},
{
"docid": "e36e318dd134fd5840d5a5340eb6e265",
"text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.",
"title": ""
}
] |
scidocsrr
|
9fc8731e7b2f7d8c4f17816f1d3b0626
|
Clickstream Analytics: An Experimental Analysis of the Amazon Users' Simulated Monthly Traffic
|
[
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
}
] |
[
{
"docid": "12680d4fcf57a8a18d9c2e2b1107bf2d",
"text": "Recent advances in computer and technology resulted into ever increasing set of documents. The need is to classify the set of documents according to the type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to the topic is a real need to analyze the research papers. Experiments are tried on different real and artificial datasets such as NEWS 20, Reuters, emails, research papers on different topics. Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithm. Initially experiment is being carried out on small dataset and performed cluster analysis. The best algorithm is applied on the extended dataset. Along with different clusters of the related documents the resulted silhouette coefficient, entropy and F-measure trend are presented to show algorithm behavior for each data set.",
"title": ""
},
{
"docid": "db53ffe2196586d570ad636decbf67de",
"text": "We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.",
"title": ""
},
{
"docid": "1d0c9c8c439f5fa41fee964caed7c2b1",
"text": "As interactive voice response systems become more prevalent and provide increasingly more complex functionality, it becomes clear that the challenges facing such systems are not solely in their synthesis and recognition capabilities. Issues such as the coordination of turn exchanges between system and user also play an important role in system usability. In particular, both systems and users have difficulty determining when the other is taking or relinquishing the turn. In this paper, we seek to identify turn-taking cues correlated with human–human turn exchanges which are automatically computable. We compare the presence of potential prosodic, acoustic, and lexico-syntactic turn-yielding cues in prosodic phrases preceding turn changes (smooth switches) vs. turn retentions (holds) vs. backchannels in the Columbia Games Corpus, a large corpus of task-oriented dialogues, to determine which features reliably distinguish between these three. We identify seven turn-yielding cues, all of which can be extracted automatically, for future use in turn generation and recognition in interactive voice response (IVR) systems. Testing Duncan’s (1972) hypothesis that these turn-yielding cues are linearly correlated with the occurrence of turn-taking attempts, we further demonstrate that, the greater the number of turn-yielding cues that are present, the greater the likelihood that a turn change will occur. We also identify six cues that precede backchannels, which will also be useful for IVR backchannel generation and recognition; these cues correlate with backchannel occurrence in a quadratic manner. We find similar results for overlapping and for non-overlapping speech. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1b0ebf54bc1d534affc758ced7aef8de",
"text": "We report our study of a silica-water interface using reactive molecular dynamics. This first-of-its-kind simulation achieves length and time scales required to investigate the detailed chemistry of the system. Our molecular dynamics approach is based on the ReaxFF force field of van Duin et al. [J. Phys. Chem. A 107, 3803 (2003)]. The specific ReaxFF implementation (SERIALREAX) and force fields are first validated on structural properties of pure silica and water systems. Chemical reactions between reactive water and dangling bonds on a freshly cut silica surface are analyzed by studying changing chemical composition at the interface. In our simulations, reactions involving silanol groups reach chemical equilibrium in approximately 250 ps. It is observed that water molecules penetrate a silica film through a proton-transfer process we call \"hydrogen hopping,\" which is similar to the Grotthuss mechanism. In this process, hydrogen atoms pass through the film by associating and dissociating with oxygen atoms within bulk silica, as opposed to diffusion of intact water molecules. The effective diffusion constant for this process, taken to be that of hydrogen atoms within silica, is calculated to be 1.68 x 10(-6) cm(2)/s. Polarization of water molecules in proximity of the silica surface is also observed. The subsequent alignment of dipoles leads to an electric potential difference of approximately 10.5 V between the silica slab and water.",
"title": ""
},
{
"docid": "1db72cafa214f41b5b6faa3a3c0c8be0",
"text": "Multiple-antenna receivers offer numerous advantages over single-antenna receivers, including sensitivity improvement, ability to reject interferers spatially and enhancement of data-rate or link reliability via MIMO. In the recent past, RF/analog phased-array receivers have been investigated [1-4]. On the other hand, digital beamforming offers far greater flexibility, including ability to form multiple simultaneous beams, ease of digital array calibration and support for MIMO. However, ADC dynamic range is challenged due to the absence of spatial interference rejection at RF/analog.",
"title": ""
},
{
"docid": "74ccb28a31d5a861bea1adfaab2e9bf1",
"text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory. Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wallmotion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOScompatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.",
"title": ""
},
{
"docid": "285587e0e608d8bafa0962b5cf561205",
"text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.",
"title": ""
},
{
"docid": "19f8ae070aa161ca1399b21b6a9c4678",
"text": "Wireless Sensor Network (WSN) is a large scale network with from dozens to thousands tiny devices. Using fields of WSNs (military, health, smart home e.g.) has a large-scale and its usage areas increasing day by day. Secure issue of WSNs is an important research area and applications of WSN have some big security deficiencies. Intrusion Detection System is a second-line of the security mechanism for networks, and it is very important to integrity, confidentiality and availability. Intrusion Detection in WSNs is somewhat different from wired and non-energy constraint wireless network because WSN has some constraints influencing cyber security approaches and attack types. This paper is a survey describing attack types of WSNs intrusion detection approaches being against to this attack types.",
"title": ""
},
{
"docid": "3753bd82d038b2b2b7f03812480fdacd",
"text": "BACKGROUND\nDuring the last few years, an increasing number of unstable thoracolumbar fractures, especially in elderly patients, has been treated by dorsal instrumentation combined with a balloon kyphoplasty. This combination provides additional stabilization to the anterior spinal column without any need for a second ventral approach.\n\n\nCASE PRESENTATION\nWe report the case of a 97-year-old male patient with a lumbar burst fracture (type A3-1.1 according to the AO Classification) who presented prolonged neurological deficits of the lower limbs - grade C according to the modified Frankel/ASIA score. After a posterior realignment of the fractured vertebra with an internal screw fixation and after an augmentation with non-absorbable cement in combination with a balloon kyphoplasty, the patient regained his mobility without any neurological restrictions.\n\n\nCONCLUSION\nEspecially in older patients, the presented technique of PMMA-augmented pedicle screw instrumentation combined with balloon-assisted kyphoplasty could be an option to address unstable vertebral fractures in \"a minor-invasive way\". The standard procedure of a two-step dorsoventral approach could be reduced to a one-step procedure.",
"title": ""
},
{
"docid": "0ad47e79e9bea44a76029e1f24f0a16c",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "e12d800b09f2f8f19a138b25d8a8d363",
"text": "This paper proposes a corpus-based approach for answering why-questions. Conventional systems use hand-crafted patterns to extract and evaluate answer candidates. However, such hand-crafted patterns are likely to have low coverage of causal expressions, and it is also difficult to assign suitable weights to the patterns by hand. In our approach, causal expressions are automatically collected from corpora tagged with semantic relations. From the collected expressions, features are created to train an answer candidate ranker that maximizes the QA performance with regards to the corpus of why-questions and answers. NAZEQA, a Japanese why-QA system based on our approach, clearly outperforms a baseline that uses hand-crafted patterns with a Mean Reciprocal Rank (top-5) of 0.305, making it presumably the best-performing fully implemented why-QA system.",
"title": ""
},
{
"docid": "dd1a7e3493b9164af4321db944b4950c",
"text": "The emerging optical/wireless topology reconfiguration technologies have shown great potential in improving the performance of data center networks. However, it also poses a big challenge on how to find the best topology configurations to support the dynamic traffic demands. In this work, we present xWeaver, a traffic-driven deep learning solution to infer the high-performance network topology online. xWeaver supports a powerful network model that enables the topology optimization over different performance metrics and network architectures. With the design of properly-structured neural networks, it can automatically derive the critical traffic patterns from data traces and learn the underlying mapping between the traffic patterns and topology configurations specific to the target data center. After offline training, xWeaver generates the optimized (or near-optimal) topology configuration online, and can also smoothly update its model parameters for new traffic patterns. We build an optical-circuit-switch-based testbed to demonstrate the function and transmission efficiency of our proposed solution. We further perform extensive simulations to show the significant performance gain of xWeaver, in supporting higher network throughput and smaller flow completion time.",
"title": ""
},
{
"docid": "5aa219f23d4be5d18ace0aa0b0b51b76",
"text": "An improved bandgap reference with high power supply rejection (PSR) is presented. The proposed circuit consists of a simple voltage subtractor circuit incorporated into the conventional Brokaw bandgap reference. Essentially, the subtractor feeds the supply noise directly into the feedback loop of the bandgap circuit which could help to suppress supply noise. The simulation results have been shown to conform well with the theoretical evaluation. The proposed circuit has also shown robust performance across temperature and process variations. where PSRRl is the power supply rejection ratio of opamp and is given by PSRRl = A 1 / A d d l . Also, gmQI . P 2 = gmQz , and A I and A d d l are the PI = g m Q , + R , + R 2 gn~Q2+~3 open-loop differential gain and power gain of amplifier respectively.",
"title": ""
},
{
"docid": "3a21628b7ca55d2910da220f0c866bea",
"text": "BACKGROUND\nType 2 diabetes is associated with a substantially increased risk of cardiovascular disease, but the role of lipid-lowering therapy with statins for the primary prevention of cardiovascular disease in diabetes is inadequately defined. We aimed to assess the effectiveness of atorvastatin 10 mg daily for primary prevention of major cardiovascular events in patients with type 2 diabetes without high concentrations of LDL-cholesterol.\n\n\nMETHODS\n2838 patients aged 40-75 years in 132 centres in the UK and Ireland were randomised to placebo (n=1410) or atorvastatin 10 mg daily (n=1428). Study entrants had no documented previous history of cardiovascular disease, an LDL-cholesterol concentration of 4.14 mmol/L or lower, a fasting triglyceride amount of 6.78 mmol/L or less, and at least one of the following: retinopathy, albuminuria, current smoking, or hypertension. The primary endpoint was time to first occurrence of the following: acute coronary heart disease events, coronary revascularisation, or stroke. Analysis was by intention to treat.\n\n\nFINDINGS\nThe trial was terminated 2 years earlier than expected because the prespecified early stopping rule for efficacy had been met. Median duration of follow-up was 3.9 years (IQR 3.0-4.7). 127 patients allocated placebo (2.46 per 100 person-years at risk) and 83 allocated atorvastatin (1.54 per 100 person-years at risk) had at least one major cardiovascular event (rate reduction 37% [95% CI -52 to -17], p=0.001). Treatment would be expected to prevent at least 37 major vascular events per 1000 such people treated for 4 years. Assessed separately, acute coronary heart disease events were reduced by 36% (-55 to -9), coronary revascularisations by 31% (-59 to 16), and rate of stroke by 48% (-69 to -11). Atorvastatin reduced the death rate by 27% (-48 to 1, p=0.059). No excess of adverse events was noted in the atorvastatin group.\n\n\nINTERPRETATION\nAtorvastatin 10 mg daily is safe and efficacious in reducing the risk of first cardiovascular disease events, including stroke, in patients with type 2 diabetes without high LDL-cholesterol. No justification is available for having a particular threshold level of LDL-cholesterol as the sole arbiter of which patients with type 2 diabetes should receive statins. The debate about whether all people with this disorder warrant statin treatment should now focus on whether any patients are at sufficiently low risk for this treatment to be withheld.",
"title": ""
},
{
"docid": "a99b1a9409ea1241695590814e685828",
"text": "A two-phase heat spreader has been developed for cooling high heat flux sources in high-power lasers, high-intensity light-emitting diodes (LEDs), and semiconductor power devices. The heat spreader uses a passive mechanism to cool heat sources with fluxes as high as 5 W/mm2 without requiring any active power consumption for the thermal solution. The prototype is similar to a vapor chamber in which water is injected into an evacuated, air-tight shell. The shell consists of an evaporator plate, a condenser plate and an adiabatic section. The heat source is made from aluminum nitride, patterned with platinum. The heat source contains a temperature sensor and is soldered to a copper substrate that serves as the evaporator. Tests were performed with several different evaporator microstructures at different heat loads. A screen mesh was able to dissipate heat loads of 2 W/mm2, but at unacceptably high evaporator temperatures. For sintered copper powder with a 50 µm particle diameter, a heat load of 8.5 W/mm2 was supported, without the occurrence of dryout. A sintered copper powder surface coated with multi-walled carbon nanotubes (CNT) that were rendered hydrophilic showed a lowered thermal resistance for the device.",
"title": ""
},
{
"docid": "a274e05ba07259455d0e1fef57f2c613",
"text": "Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover images. The Least Significant Bit (LSB) steganography that replaces the least significant bits of the host medium is a widely used technique with low computational complexity and high insertion capacity. Although it has good perceptual transparency, it is vulnerable to steganalysis which is based on histogram analysis. In all the existing schemes detection of a secret message in a cover image can be easily detected from the histogram analysis and statistical analysis. Therefore developing new LSB steganography algorithms against statistical and histogram analysis is the prime requirement.",
"title": ""
},
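For reference, the classic LSB replacement scheme that the passage above critiques (the baseline that plain histogram analysis can detect) looks roughly like the Python sketch below; it is not the improved histogram-resistant algorithm the passage calls for, and the function names are my own.

import numpy as np

def lsb_embed(cover, message_bits):
    """Replace the least significant bit of successive pixels with message bits."""
    flat = cover.ravel().copy()
    if len(message_bits) > flat.size:
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(message_bits):
        flat[i] = (flat[i] & 0xFE) | bit   # clear the LSB, then write the message bit
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the first n_bits least significant bits back out."""
    return [int(p & 1) for p in stego.ravel()[:n_bits]]

cover = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, msg)
assert lsb_extract(stego, len(msg)) == msg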
{
"docid": "37a47bd2561b534d5734d250d16ff1c2",
"text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though a significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.",
"title": ""
},
{
"docid": "3b62ccd8e989d81f86b557e8d35a8742",
"text": "The ability to accurately judge the similarity between natural language sentences is critical to the performance of several applications such as text mining, question answering, and text summarization. Given two sentences, an effective similarity measure should be able to determine whether the sentences are semantically equivalent or not, taking into account the variability of natural language expression. That is, the correct similarity judgment should be made even if the sentences do not share similar surface form. In this work, we evaluate fourteen existing text similarity measures which have been used to calculate similarity score between sentences in many text applications. The evaluation is conducted on three different data sets, TREC9 question variants, Microsoft Research paraphrase corpus, and the third recognizing textual entailment data set.",
"title": ""
},
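The passage above evaluates fourteen sentence-similarity measures without listing them; as a concrete, hedged example of the kind of surface-level measure such studies typically include, here is a plain bag-of-words cosine similarity in Python (a common baseline, not necessarily one of the fourteen).

import math
from collections import Counter

def cosine_similarity(s1, s2):
    """Cosine similarity between the bag-of-words vectors of two sentences."""
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

print(cosine_similarity("the cat sat on the mat", "a cat sat on a mat"))

Measures like this ignore word order and synonymy, which is exactly the weakness such evaluations are designed to expose.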
{
"docid": "c43ad751dade7d0a5a396f95cc904030",
"text": "The electric grid is radically evolving and transforming into the smart grid, which is characterized by improved energy efficiency and manageability of available resources. Energy management (EM) systems, often integrated with home automation systems, play an important role in the control of home energy consumption and enable increased consumer participation. These systems provide consumers with information about their energy consumption patterns and help them adopt energy-efficient behavior. The new generation EM systems leverage advanced analytics and communication technologies to offer consumers actionable information and control features, while ensuring ease of use, availability, security, and privacy. In this article, we present a survey of the state of the art in EM systems, applications, and frameworks. We define a set of requirements for EM systems and evaluate several EM systems in this context. We also discuss emerging trends in this area.",
"title": ""
},
{
"docid": "a7fa5171308a566a19da39ee6d7b74f6",
"text": "Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",
"title": ""
}
] |
scidocsrr
|
8ac37d86ad7ea7c70031ac22ebb19981
|
The red one!: On learning to refer to things based on discriminative properties
|
[
{
"docid": "08768f6cf1305884a735bbe4e7e98474",
"text": "Language is sensitive to both semantic and pragmatic effects. To capture both effects, we model language use as a cooperative game between two players: a speaker, who generates an utterance, and a listener, who responds with an action. Specifically, we consider the task of generating spatial references to objects, wherein the listener must accurately identify an object described by the speaker. We show that a speaker model that acts optimally with respect to an explicit, embedded listener model substantially outperforms one that is trained to directly generate spatial descriptions.",
"title": ""
},
{
"docid": "f6a66ea4a5e8683bae76e71912694874",
"text": "We consider the task of learning visual connections between object categories using the ImageNet dataset, which is a large-scale dataset ontology containing more than 15 thousand object classes. We want to discover visual relationships between the classes that are currently missing (such as similar colors or shapes or textures). In this work we learn 20 visual attributes and use them both in a zero-shot transfer learning experiment as well as to make visual connections between semantically unrelated object categories.",
"title": ""
}
] |
[
{
"docid": "388f4a555c7aa004f081cbdc6bc0f799",
"text": "We present a multi-GPU version of GPUSPH, a CUDA implementation of fluid-dynamics models based on the smoothed particle hydrodynamics (SPH) numerical method. The SPH is a well-known Lagrangian model for the simulation of free-surface fluid flows; it exposes a high degree of parallelism and has already been successfully ported to GPU. We extend the GPU-based simulator to run simulations on multiple GPUs simultaneously, to obtain a gain in speed and overcome the memory limitations of using a single device. The computational domain is spatially split with minimal overlapping and shared volume slices are updated at every iteration of the simulation. Data transfers are asynchronous with computations, thus completely covering the overhead introduced by slice exchange. A simple yet effective load balancing policy preserves the performance in case of unbalanced simulations due to asymmetric fluid topologies. The obtained speedup factor (up to 4.5x for 6 GPUs) closely follows the expected one (5x for 6 GPUs) and it is possible to run simulations with a higher number of particles than would fit on a single device. We use the Karp-Flatt metric to formally estimate the overall efficiency of the parallelization.",
"title": ""
},
{
"docid": "d6ca38ccad91c0c2c51ba3dd5be454b2",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
{
"docid": "72cfe76ea68d5692731531aea02444d0",
"text": "Primary human tumor culture models allow for individualized drug sensitivity testing and are therefore a promising technique to achieve personalized treatment for cancer patients. This would especially be of interest for patients with advanced stage head and neck cancer. They are extensively treated with surgery, usually in combination with high-dose cisplatin chemoradiation. However, adding cisplatin to radiotherapy is associated with an increase in severe acute toxicity, while conferring only a minor overall survival benefit. Hence, there is a strong need for a preclinical model to identify patients that will respond to the intended treatment regimen and to test novel drugs. One of such models is the technique of culturing primary human tumor tissue. This review discusses the feasibility and success rate of existing primary head and neck tumor culturing techniques and their corresponding chemo- and radiosensitivity assays. A comprehensive literature search was performed and success factors for culturing in vitro are debated, together with the actual value of these models as preclinical prediction assay for individual patients. With this review, we aim to fill a gap in the understanding of primary culture models from head and neck tumors, with potential importance for other tumor types as well.",
"title": ""
},
{
"docid": "0772a2f393b1820e6fa8970cc14339a2",
"text": "The internet is empowering the rise of crowd work, gig work, and other forms of on--demand labor. A large and growing body of scholarship has attempted to predict the socio--technical outcomes of this shift, especially addressing three questions: begin{inlinelist} item What are the complexity limits of on-demand work?, item How far can work be decomposed into smaller microtasks?, and item What will work and the place of work look like for workers' end {inlinelist} In this paper, we look to the historical scholarship on piecework --- a similar trend of work decomposition, distribution, and payment that was popular at the turn of the nth{20} century --- to understand how these questions might play out with modern on--demand work. We identify the mechanisms that enabled and limited piecework historically, and identify whether on--demand work faces the same pitfalls or might differentiate itself. This approach introduces theoretical grounding that can help address some of the most persistent questions in crowd work, and suggests design interventions that learn from history rather than repeat it.",
"title": ""
},
{
"docid": "cd48180e93d25858410222fff4b1f43e",
"text": "Metaphors pervade discussions of social issues like climate change, the economy, and crime. We ask how natural language metaphors shape the way people reason about such social issues. In previous work, we showed that describing crime metaphorically as a beast or a virus, led people to generate different solutions to a city's crime problem. In the current series of studies, instead of asking people to generate a solution on their own, we provided them with a selection of possible solutions and asked them to choose the best ones. We found that metaphors influenced people's reasoning even when they had a set of options available to compare and select among. These findings suggest that metaphors can influence not just what solution comes to mind first, but also which solution people think is best, even when given the opportunity to explicitly compare alternatives. Further, we tested whether participants were aware of the metaphor. We found that very few participants thought the metaphor played an important part in their decision. Further, participants who had no explicit memory of the metaphor were just as much affected by the metaphor as participants who were able to remember the metaphorical frame. These findings suggest that metaphors can act covertly in reasoning. Finally, we examined the role of political affiliation on reasoning about crime. The results confirm our previous findings that Republicans are more likely to generate enforcement and punishment solutions for dealing with crime, and are less swayed by metaphor than are Democrats or Independents.",
"title": ""
},
{
"docid": "a478928c303153172133d805ac35c6cc",
"text": "Chest X-ray is one of the most accessible medical imaging technique for diagnosis of multiple diseases. With the availability of ChestX-ray14, which is a massive dataset of chest X-ray images and provides annotations for 14 thoracic diseases; it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment a set of deep learning models and present a cascaded deep neural network that can diagnose all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides the quantitative results to answer following research questions for the dataset: 1) What loss functions to use for training DCNN from scratch on ChestXray14 dataset that demonstrates high class imbalance and label co occurrence? 2) How to use cascading to model label dependency and to improve accuracy of the deep learning model?",
"title": ""
},
{
"docid": "8f089d55c0ce66db7bbf27476267a8e5",
"text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.",
"title": ""
},
{
"docid": "e33fa3ebbd612dbc6e76feebde52d3d9",
"text": "In this paper, we introduce a general iterative human-machine collaborative method for training crowdsource workers: the classifier (i.e., the machine) selects the highest quality examples for training the crowdsource workers (i.e., the humans). Then, the latter annotate the lower quality examples such that the classifier can be re-trained with more accurate examples. This process can be iterated several times. We tested our approach on two different tasks, Relation Extraction and Community Question Answering, which are also in two different languages, English and Arabic, respectively. Our experimental results show a significant improvement for creating Gold Standard data over distant supervision or just crowdsourcing without worker training. At the same time, our method approach the performance than state-of-the-art methods using expensive Gold Standard for training workers",
"title": ""
},
{
"docid": "34dcd712c5eae560f3d611fcc8ef9825",
"text": "Do I understand the problem of P vs. NP? The answer is a simple \"no\". If I were to understand the problem, I would've solved it as well\" — This is the current state of many theoretical computer scientists around the world. Apart from a bag of laureates waiting for the person who successfully understands this most popular millennium prize riddle, this is also considered to be a game changer in both mathematics and computer science. According to Scott Aaronson, \"If P = NP, then the world would be a profoundly different place than we usually assume it to be\". The speaker intends to share the part that he understood on the problem, and about the efforts that were recently put-forth in cracking the same.",
"title": ""
},
{
"docid": "d28d956c271189f4909ed11f0e5c342a",
"text": "This article presents new oscillation criteria for the second-order delay differential equation (p(t)(x′(t))α)′ + q(t)x(t− τ) + n X i=1 qi(t)x αi (t− τ) = e(t) where τ ≥ 0, p(t) ∈ C1[0,∞), q(t), qi(t), e(t) ∈ C[0,∞), p(t) > 0, α1 > · · · > αm > α > αm+1 > · · · > αn > 0 (n > m ≥ 1), α1, . . . , αn and α are ratio of odd positive integers. Without assuming that q(t), qi(t) and e(t) are nonnegative, the results in [6, 8] have been extended and a mistake in the proof of the results in [3] is corrected.",
"title": ""
},
{
"docid": "7ebaee3df1c8ee4bf1c82102db70f295",
"text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.",
"title": ""
},
{
"docid": "5403ebc5a8fc5789809145fb8114bb63",
"text": "This paper explores why occupational therapists use arts and crafts as therapeutic modalities. Beginning with the turn-of-the-century origins of occupational therapy, the paper traces the similarities and differences in the ideas and beliefs of the founders of occupational therapy and the proponents of the arts-and-crafts movement.",
"title": ""
},
{
"docid": "4c29f5ffaeff5911e3d5f7a85146c601",
"text": "In August 2004, Duke University provided free iPods to its entire freshman class (Belanger, 2005). The next month, a Korean education firm offered free downloadable college entrance exam lectures to students who purchased an iRiver personal multimedia player (Kim, 2004). That October, a financial trading firm in Chicago was reportedly assessing the hand-eye coordination of traders’ using GameBoys (Logan, 2004). Yet while such innovative applications abound, the use of technology in education and training is far from new, a fact as true in language classrooms as it is in medical schools.",
"title": ""
},
{
"docid": "f5352a1eee7340bf7c7e37b1210c7b99",
"text": "In recent years, traditional cybersecurity safeguards have proven ineffective against insider threats. Famous cases of sensitive information leaks caused by insiders, including the WikiLeaks release of diplomatic cables and the Edward Snowden incident, have greatly harmed the U.S. government's relationship with other governments and with its own citizens. Data Leak Prevention (DLP) is a solution for detecting and preventing information leaks from within an organization's network. However, state-of-art DLP detection models are only able to detect very limited types of sensitive information, and research in the field has been hindered due to the lack of available sensitive texts. Many researchers have focused on document-based detection with artificially labeled “confidential documents” for which security labels are assigned to the entire document, when in reality only a portion of the document is sensitive. This type of whole-document based security labeling increases the chances of preventing authorized users from accessing non-sensitive information within sensitive documents. In this paper, we introduce Automated Classification Enabled by Security Similarity (ACESS), a new and innovative detection model that penetrates the complexity of big text security classification/detection. To analyze the ACESS system, we constructed a novel dataset, containing formerly classified paragraphs from diplomatic cables made public by the WikiLeaks organization. To our knowledge this paper is the first to analyze a dataset that contains actual formerly sensitive information annotated at paragraph granularity.",
"title": ""
},
{
"docid": "68473e74e1c188d41f4ea42028728a18",
"text": "The mastery of fundamental movement skills (FMS) has been purported as contributing to children's physical, cognitive and social development and is thought to provide the foundation for an active lifestyle. Commonly developed in childhood and subsequently refined into context- and sport-specific skills, they include locomotor (e.g. running and hopping), manipulative or object control (e.g. catching and throwing) and stability (e.g. balancing and twisting) skills. The rationale for promoting the development of FMS in childhood relies on the existence of evidence on the current or future benefits associated with the acquisition of FMS proficiency. The objective of this systematic review was to examine the relationship between FMS competency and potential health benefits in children and adolescents. Benefits were defined in terms of psychological, physiological and behavioural outcomes that can impact public health. A systematic search of six electronic databases (EMBASE, OVID MEDLINE, PsycINFO, PubMed, Scopus and SportDiscus®) was conducted on 22 June 2009. Included studies were cross-sectional, longitudinal or experimental studies involving healthy children or adolescents (aged 3-18 years) that quantitatively analysed the relationship between FMS competency and potential benefits. The search identified 21 articles examining the relationship between FMS competency and eight potential benefits (i.e. global self-concept, perceived physical competence, cardio-respiratory fitness [CRF], muscular fitness, weight status, flexibility, physical activity and reduced sedentary behaviour). We found strong evidence for a positive association between FMS competency and physical activity in children and adolescents. There was also a positive relationship between FMS competency and CRF and an inverse association between FMS competency and weight status. Due to an inadequate number of studies, the relationship between FMS competency and the remaining benefits was classified as uncertain. More longitudinal and intervention research examining the relationship between FMS competency and potential psychological, physiological and behavioural outcomes in children and adolescents is recommended.",
"title": ""
},
{
"docid": "a8478fa2a7088c270f1b3370bb06d862",
"text": "Sodium-ion batteries (SIBs) are prospective alternative to lithium-ion batteries for large-scale energy-storage applications, owing to the abundant resources of sodium. Metal sulfides are deemed to be promising anode materials for SIBs due to their low-cost and eco-friendliness. Herein, for the first time, series of copper sulfides (Cu2S, Cu7S4, and Cu7KS4) are controllably synthesized via a facile electrochemical route in KCl-NaCl-Na2S molten salts. The as-prepared Cu2S with micron-sized flakes structure is first investigated as anode of SIBs, which delivers a capacity of 430 mAh g-1 with a high initial Coulombic efficiency of 84.9% at a current density of 100 mA g-1. Moreover, the Cu2S anode demonstrates superior capability (337 mAh g-1 at 20 A g-1, corresponding to 50 C) and ultralong cycle performance (88.2% of capacity retention after 5000 cycles at 5 A g-1, corresponding to 0.0024% of fade rate per cycle). Meanwhile, the pseudocapacitance contribution and robust porous structure in situ formed during cycling endow the Cu2S anodes with outstanding rate capability and enhanced cyclic performance, which are revealed by kinetics analysis and ex situ characterization.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
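The Header / Dictionary / Triples split described above can be illustrated with a toy Python sketch: each distinct RDF term gets an integer ID and the triples are stored as ID tuples. The real HDT format uses far more compact bitmap and stream encodings; everything below (function name, header fields, example terms) is an illustrative assumption, not the actual specification.

def build_hdt_like(triples):
    """Toy Dictionary + Triples split: terms -> integer IDs, triples -> ID tuples."""
    dictionary = {}

    def term_id(term):
        if term not in dictionary:
            dictionary[term] = len(dictionary) + 1   # IDs assigned on first appearance
        return dictionary[term]

    encoded = [(term_id(s), term_id(p), term_id(o)) for s, p, o in triples]
    header = {"format": "toy-HDT", "n_terms": len(dictionary), "n_triples": len(encoded)}
    return header, dictionary, encoded

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:alice"),
    ("ex:alice", "foaf:name", '"Alice"'),
]
header, dictionary, encoded = build_hdt_like(triples)   # repeated terms are stored once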
{
"docid": "1d1005fe036932695a7706cde950fe75",
"text": "In recent years, the use of mobile ad hoc networks (MANETs) has been widespread in many applications, including some mission critical applications, and as such security has become one of the major concerns in MANETs. Due to some unique characteristics of MANETs, prevention methods alone are not sufficient to make them secure; therefore, detection should be added as another defense before an attacker can breach the system. In general, the intrusion detection techniques for traditional wireless networks are not well suited for MANETs. In this paper, we classify the architectures for intrusion detection systems (IDS) that have been introduced for MANETs. Current IDS’s corresponding to those architectures are also reviewed and compared. We then provide some directions for future research.",
"title": ""
},
{
"docid": "635f090bc5d0bf928640aaaaa1e16861",
"text": "Event-based social networks (EBSNs) provide convenient online platforms for users to organize, attend and share social events. Understanding users’ social influences in social networks can benefit many applications, such as social recommendation and social marketing. In this paper, we focus on the problem of predicting users’ social influences on upcoming events in EBSNs. We formulate this prediction problem as the estimation of unobserved entries of the constructed user-event social influence matrix, where each entry represents the influence value of a user on an event. In particular, we define a user's social influence on a given event as the proportion of the user's friends who are influenced by him/her to attend the event. To solve this problem, we present a hybrid collaborative filtering model, namely, Matrix Factorization with Event-User Neighborhood (MF-EUN) model, by incorporating both event-based and user-based neighborhood methods into matrix factorization. Due to the fact that the constructed social influence matrix is very sparse and the overlap values in the matrix are few, it is challenging to find reliable similar neighbors using the widely adopted similarity measures (e.g., Pearson correlation and Cosine similarity). To address this challenge, we propose an additional information based neighborhood discovery (AID) method by considering both event-specific and user-specific features in EBSNs. The parameters of our MF-EUN model are determined by minimizing the associated regularized squared error function through stochastic gradient descent. We conduct a comprehensive performance evaluation on real-world datasets collected from DoubanEvent. Experimental results show that our proposed hybrid collaborative filtering model is superior than several alternatives, which provides excellent performance with RMSE and MAE reaching 0.248 and 0.1266 respectively in the 90% training data of 10 000",
"title": ""
},
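The full MF-EUN model above combines matrix factorization with event- and user-based neighborhood terms; the Python sketch below shows only the plain matrix-factorization core trained by stochastic gradient descent on observed (user, event, influence) entries. The latent dimension, learning rate, and regularization values are assumptions for illustration, not the paper's settings.

import numpy as np

def mf_sgd(observed, n_users, n_events, k=8, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Factorize a sparse user-event influence matrix from (user, event, value) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_events, k))
    for _ in range(epochs):
        for u, e, r in observed:
            err = r - U[u] @ V[e]
            u_old = U[u].copy()                      # keep old factors for a symmetric update
            U[u] += lr * (err * V[e] - reg * U[u])
            V[e] += lr * (err * u_old - reg * V[e])
    return U, V

observed = [(0, 0, 0.8), (0, 1, 0.2), (1, 0, 0.5)]   # (user, event, influence in [0, 1])
U, V = mf_sgd(observed, n_users=2, n_events=2)
predicted = U @ V.T                                  # estimates for the unobserved entries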
{
"docid": "c263d0c704069ecbdd9d27e9722536e3",
"text": "This paper proposes a chaos-based true random number generator using image as nondeterministic entropy sources. Logistic map is applied to permute and diffuse the image to produce a random sequence after the image is divided to bit-planes. The generated random sequence passes NIST 800-22 test suite with good performance.",
"title": ""
}
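The passage above describes a chaos-based generator that mixes image-derived entropy with a logistic-map keystream, but it does not spell out the permutation/diffusion scheme. The Python sketch below is therefore only a simplified illustration under assumed parameters (r = 3.99, a sum-based seed); the function names and the XOR-style diffusion step are my own assumptions, not the paper's algorithm.

import numpy as np

def logistic_bits(seed, n, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x) and threshold the orbit into bits."""
    x = seed
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def random_bits_from_image(img, n_bits):
    """Diffuse image bit-planes with a chaotic keystream (simplified sketch)."""
    planes = np.unpackbits(img.astype(np.uint8).ravel())
    if n_bits > planes.size:
        raise ValueError("image does not contain enough bits")
    seed = 0.3 + 0.4 * (float(img.sum() % 1000) / 1000.0)   # seed kept inside (0, 1)
    keystream = logistic_bits(seed, n_bits)
    return planes[:n_bits] ^ keystream

# toy usage with a synthetic "image"
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
bits = random_bits_from_image(img, 4096)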
] |
scidocsrr
|
a78be6c9a0927113b9fa7925014fab58
|
End-to-end visual speech recognition with LSTMS
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
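The ADADELTA update summarized above can be written down compactly: two decaying accumulators (squared gradients and squared updates) set a per-dimension step size, so no global learning rate is needed. The following Python/NumPy sketch is a minimal, hedged illustration of that rule; the default rho and eps values and the toy objective are assumptions for demonstration, not taken from the passage.

import numpy as np

def adadelta(grad_fn, x0, rho=0.95, eps=1e-6, steps=1000):
    """Minimize a function with the ADADELTA per-dimension update rule."""
    x = np.asarray(x0, dtype=float).copy()
    acc_grad = np.zeros_like(x)    # running average of squared gradients, E[g^2]
    acc_update = np.zeros_like(x)  # running average of squared updates, E[dx^2]
    for _ in range(steps):
        g = grad_fn(x)
        acc_grad = rho * acc_grad + (1.0 - rho) * g ** 2
        # step = -(RMS of past updates / RMS of gradients) * gradient, per dimension
        dx = -np.sqrt(acc_update + eps) / np.sqrt(acc_grad + eps) * g
        acc_update = rho * acc_update + (1.0 - rho) * dx ** 2
        x += dx
    return x

# toy usage: minimize ||x - 3||^2, whose gradient is 2*(x - 3)
x_min = adadelta(lambda x: 2.0 * (x - 3.0), np.array([10.0, -4.0]))

Note that no learning rate is passed in; the ratio of the two running RMS terms plays that role, which is the point of the method.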
{
"docid": "7d78ca30853ed8a84bbb56fe82e3b9ba",
"text": "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system.",
"title": ""
}
] |
[
{
"docid": "14b48440dd0b797cec04bbc249ee9940",
"text": "T cells use integrins in essentially all of their functions. They use integrins to migrate in and out of lymph nodes and, following infection, to migrate into other tissues. At the beginning of an immune response, integrins also participate in the immunological synapse formed between T cells and antigen-presenting cells. Because the ligands for integrins are widely expressed, integrin activity on T cells must be tightly controlled. Integrins become active following signalling through other membrane receptors, which cause both affinity alteration and an increase in integrin clustering. Lipid raft localization may increase integrin activity. Signalling pathways involving ADAP, Vav-1 and SKAP-55, as well as Rap1 and RAPL, cause clustering of leukocyte function-associated antigen-1 (LFA-1; integrin alphaLbeta2). T-cell integrins can also signal, and the pathways dedicated to the migratory activity of T cells have been the most investigated so far. Active LFA-1 causes T-cell attachment and lamellipodial movement induced by myosin light chain kinase at the leading edge, whereas RhoA and ROCK cause T-cell detachment at the trailing edge. Another important signalling pathway acts through CasL/Crk, which might regulate the activity of the GTPases Rac and Rap1 that have important roles in T-cell migration.",
"title": ""
},
{
"docid": "541075ddb29dd0acdf1f0cf3784c220a",
"text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves the stateof-the-arts performance. 1",
"title": ""
},
{
"docid": "c071d5a7ff1dbfd775e9ffdee1b07662",
"text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.",
"title": ""
},
{
"docid": "ab50f458d919ba3ac3548205418eea62",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
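The processing pipeline described above (high-pass filtering at 1.0 Hz to remove the respiratory component, then numerical integration to velocity and position) can be sketched in a few lines of Python with SciPy. The sampling rate, filter order, and the use of cumulative sums as the integrator are assumptions, since the passage does not specify them.

import numpy as np
from scipy.signal import butter, filtfilt

def heart_motion(accel, fs=250.0, cutoff=1.0):
    """Isolate heartbeat motion from respiration and integrate acceleration twice."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="highpass")
    beat_accel = filtfilt(b, a, accel, axis=0)       # zero-phase high-pass at 1.0 Hz
    dt = 1.0 / fs
    velocity = np.cumsum(beat_accel, axis=0) * dt    # first numerical integration
    position = np.cumsum(velocity, axis=0) * dt      # second numerical integration
    return velocity, position

# toy usage: 10 s of synthetic three-axis acceleration at roughly 78 beats per minute
t = np.arange(0, 10, 1 / 250.0)
accel = np.column_stack([np.sin(2 * np.pi * 1.3 * t)] * 3)
vel, pos = heart_motion(accel)

In practice the integrated traces would also be detrended or high-pass filtered again, since integration amplifies any residual low-frequency drift.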
{
"docid": "b5eafe60989c0c4265fa910c79bbce41",
"text": "Little research has addressed IT professionals’ script debugging strategies, or considered whether there may be gender differences in these strategies. What strategies do male and female scripters use and what kinds of mechanisms do they employ to successfully fix bugs? Also, are scripters’ debugging strategies similar to or different from those of spreadsheet debuggers? Without the answers to these questions, tool designers do not have a target to aim at for supporting how male and female scripters want to go about debugging. We conducted a think-aloud study to bridge this gap. Our results include (1) a generalized understanding of debugging strategies used by spreadsheet users and scripters, (2) identification of the multiple mechanisms scripters employed to carry out the strategies, and (3) detailed examples of how these debugging strategies were employed by males and females to successfully fix bugs.",
"title": ""
},
{
"docid": "8505afb27c5ef73baeaa53dfe1c337ae",
"text": "The Osprey (Pandion haliaetus) is one of only six bird species with an almost world-wide distribution. We aimed at clarifying its phylogeographic structure and elucidating its taxonomic status (as it is currently separated into four subspecies). We tested six biogeographical scenarios to explain how the species’ distribution and differentiation took place in the past and how such a specialized raptor was able to colonize most of the globe. Using two mitochondrial genes (cyt b and ND2), the Osprey appeared structured into four genetic groups representing quasi non-overlapping geographical regions. The group Indo-Australasia corresponds to the cristatus ssp, as well as the group Europe-Africa to the haliaetus ssp. In the Americas, we found a single lineage for both carolinensis and ridgwayi ssp, whereas in north-east Asia (Siberia and Japan), we discovered a fourth new lineage. The four lineages are well differentiated, contrasting with the low genetic variability observed within each clade. Historical demographic reconstructions suggested that three of the four lineages experienced stable trends or slight demographic increases. Molecular dating estimates the initial split between lineages at about 1.16 Ma ago, in the Early Pleistocene. Our biogeographical inference suggests a pattern of colonization from the American continent towards the Old World. Populations of the Palearctic would represent the last outcomes of this colonization. At a global scale the Osprey complex may be composed of four different Evolutionary Significant Units, which should be treated as specific management units. Our study brought essential genetic clarifications, which have implications for conservation strategies in identifying distinct lineages across which birds should not be artificially moved through exchange/reintroduction schemes.",
"title": ""
},
{
"docid": "eb0ec729796a93f36d348e70e3fa9793",
"text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.",
"title": ""
},
{
"docid": "21961041e3bf66d7e3f004c65ddc5da2",
"text": "A novel high step-up converter is proposed for a front-end photovoltaic system. Through a voltage multiplier module, an asymmetrical interleaved high step-up converter obtains high step-up gain without operating at an extreme duty ratio. The voltage multiplier module is composed of a conventional boost converter and coupled inductors. An extra conventional boost converter is integrated into the first phase to achieve a considerably higher voltage conversion ratio. The two-phase configuration not only reduces the current stress through each power switch, but also constrains the input current ripple, which decreases the conduction losses of metal-oxide-semiconductor field-effect transistors (MOSFETs). In addition, the proposed converter functions as an active clamp circuit, which alleviates large voltage spikes across the power switches. Thus, the low-voltage-rated MOSFETs can be adopted for reductions of conduction losses and cost. Efficiency improves because the energy stored in leakage inductances is recycled to the output terminal. Finally, the prototype circuit with a 40-V input voltage, 380-V output, and 1000- W output power is operated to verify its performance. The highest efficiency is 96.8%.",
"title": ""
},
{
"docid": "2a818337c472caa1e693edb05722954b",
"text": "UNLABELLED\nThis study focuses on the relationship between classroom ventilation rates and academic achievement. One hundred elementary schools of two school districts in the southwest United States were included in the study. Ventilation rates were estimated from fifth-grade classrooms (one per school) using CO(2) concentrations measured during occupied school days. In addition, standardized test scores and background data related to students in the classrooms studied were obtained from the districts. Of 100 classrooms, 87 had ventilation rates below recommended guidelines based on ASHRAE Standard 62 as of 2004. There is a linear association between classroom ventilation rates and students' academic achievement within the range of 0.9-7.1 l/s per person. For every unit (1 l/s per person) increase in the ventilation rate within that range, the proportion of students passing standardized test (i.e., scoring satisfactory or above) is expected to increase by 2.9% (95%CI 0.9-4.8%) for math and 2.7% (0.5-4.9%) for reading. The linear relationship observed may level off or change direction with higher ventilation rates, but given the limited number of observations, we were unable to test this hypothesis. A larger sample size is needed for estimating the effect of classroom ventilation rates higher than 7.1 l/s per person on academic achievement.\n\n\nPRACTICAL IMPLICATIONS\nThe results of this study suggest that increasing the ventilation rates toward recommended guideline ventilation rates in classrooms should translate into improved academic achievement of students. More studies are needed to fully understand the relationships between ventilation rate, other indoor environmental quality parameters, and their effects on students' health and achievement. Achieving the recommended guidelines and pursuing better understanding of the underlying relationships would ultimately support both sustainable and productive school environments for students and personnel.",
"title": ""
},
{
"docid": "bcab7b2f12f72c6db03446046586381e",
"text": "The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and audit ability. The complexity resulting from large-scale virtualization and data distribution carried out in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. This paper discusses key issues and challenges in achieving a trusted cloud through the use of detective controls, and presents the Trust Cloud framework, which addresses accountability in cloud computing via technical and policy-based approaches.",
"title": ""
},
{
"docid": "8f449e62b300c4c8ff62306d02f2f820",
"text": "The effects of adrenal corticosteroids on subsequent adrenocorticotropin secretion are complex. Acutely (within hours), glucocorticoids (GCs) directly inhibit further activity in the hypothalamo-pituitary-adrenal axis, but the chronic actions (across days) of these steroids on brain are directly excitatory. Chronically high concentrations of GCs act in three ways that are functionally congruent. (i) GCs increase the expression of corticotropin-releasing factor (CRF) mRNA in the central nucleus of the amygdala, a critical node in the emotional brain. CRF enables recruitment of a chronic stress-response network. (ii) GCs increase the salience of pleasurable or compulsive activities (ingesting sucrose, fat, and drugs, or wheel-running). This motivates ingestion of \"comfort food.\" (iii) GCs act systemically to increase abdominal fat depots. This allows an increased signal of abdominal energy stores to inhibit catecholamines in the brainstem and CRF expression in hypothalamic neurons regulating adrenocorticotropin. Chronic stress, together with high GC concentrations, usually decreases body weight gain in rats; by contrast, in stressed or depressed humans chronic stress induces either increased comfort food intake and body weight gain or decreased intake and body weight loss. Comfort food ingestion that produces abdominal obesity, decreases CRF mRNA in the hypothalamus of rats. Depressed people who overeat have decreased cerebrospinal CRF, catecholamine concentrations, and hypothalamo-pituitary-adrenal activity. We propose that people eat comfort food in an attempt to reduce the activity in the chronic stress-response network with its attendant anxiety. These mechanisms, determined in rats, may explain some of the epidemic of obesity occurring in our society.",
"title": ""
},
{
"docid": "3e691cf6055eb564dedca955b816a654",
"text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other. 2011 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +86 27 8755 8100; fax: +86 27 8755 6437. E-mail addresses: [email protected] (Y. Lu), [email protected] (S. Yang), [email protected] (Patrick Y.K. Chau), [email protected] (Y. Cao). 1 Tel.: +86 27 8755 6448. 2 Tel.: +852 2859 1025. 3 Tel.: +86 27 8755 8100.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "d0c940a651b1231c6ef4f620e7acfdcc",
"text": "Harvard Business School Working Paper Number 05-016. Working papers are distributed in draft form for purposes of comment and discussion only. They may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author(s). Abstract Much recent research has pointed to the critical role of architecture in the development of a firm's products, services and technical capabilities. A common theme in these studies is the notion that specific characteristics of a product's design – for example, the degree of modularity it exhibits – can have a profound effect on among other things, its performance, the flexibility of the process used to produce it, the value captured by its producer, and the potential for value creation at the industry level. Unfortunately, this stream of work has been limited by the lack of appropriate tools, metrics and terminology for characterizing key attributes of a product's architecture in a robust fashion. As a result, there is little empirical evidence that the constructs emerging in the literature have power in predicting the phenomena with which they are associated. This paper reports data from a research project which seeks to characterize the differences in design structure between complex software products. In particular, we adopt a technique based upon Design Structure Matrices (DSMs) to map the dependencies between different elements of a design then develop metrics that allow us to compare the structures of these different DSMs. We demonstrate the power of this approach in two ways: First, we compare the design structures of two complex software products – the Linux operating system and the Mozilla web browser – that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We find significant differences in their designs, consistent with an interpretation that Linux possesses a more \" modular \" architecture. We then track the evolution of Mozilla, paying particular attention to a major \" redesign \" effort that took place several months after its release as an open source product. We show that this effort resulted in a design structure that was significantly more modular than its predecessor, and indeed, more modular than that of a comparable version of Linux. Our findings demonstrate that it is possible to characterize the structure of complex product designs and draw meaningful conclusions about the precise ways in which they differ. We provide a description of a set of tools …",
"title": ""
},
{
"docid": "0dbca0a2aec1b27542463ff80fc4f59d",
"text": "An emerging research area named Learning-to-Rank (LtR) has shown that effective solutions to the ranking problem can leverage machine learning techniques applied to a large set of features capturing the relevance of a candidate document for the user query. Large-scale search systems must however answer user queries very fast, and the computation of the features for candidate documents must comply with strict back-end latency constraints. The number of features cannot thus grow beyond a given limit, and Feature Selection (FS) techniques have to be exploited to find a subset of features that both meets latency requirements and leads to high effectiveness of the trained models. In this paper, we propose three new algorithms for FS specifically designed for the LtR context where hundreds of continuous or categorical features can be involved. We present a comprehensive experimental analysis conducted on publicly available LtR datasets and we show that the proposed strategies outperform a well-known state-of-the-art competitor.",
"title": ""
},
{
"docid": "5757d96fce3e0b3b3303983b15d0030d",
"text": "Malicious applications pose a threat to the security of the Android platform. The growing amount and diversity of these applications render conventional defenses largely ineffective and thus Android smartphones often remain unprotected from novel malware. In this paper, we propose DREBIN, a lightweight method for detection of Android malware that enables identifying malicious applications directly on the smartphone. As the limited resources impede monitoring applications at run-time, DREBIN performs a broad static analysis, gathering as many features of an application as possible. These features are embedded in a joint vector space, such that typical patterns indicative for malware can be automatically identified and used for explaining the decisions of our method. In an evaluation with 123,453 applications and 5,560 malware samples DREBIN outperforms several related approaches and detects 94% of the malware with few false alarms, where the explanations provided for each detection reveal relevant properties of the detected malware. On five popular smartphones, the method requires 10 seconds for an analysis on average, rendering it suitable for checking downloaded applications directly on the device.",
"title": ""
},
{
"docid": "3038afba11844c31fefc30a8245bc61c",
"text": "Frame duplication is to duplicate a sequence of consecutive frames and insert or replace to conceal or imitate a specific event/content in the same source video. To automatically detect the duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize the frame duplications. We first run an I3D network [2] to obtain the most candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with ResNet network [6] to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector fine-tuned on the I3D network to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we strongly demonstrate that our proposed method outperforms the current state-of-the-art methods.",
"title": ""
},
{
"docid": "af5fe4ecd02d320477e2772d63b775dd",
"text": "Background: Blockchain technology is recently receiving a lot of attention from researchers as well as from many different industries. There are promising application areas for the logistics sector like digital document exchange and tracking of goods, but there is no existing research on these topics. This thesis aims to contribute to the research of information systems in logistics in combination with Blockchain technology. Purpose: The purpose of this research is to explore the capabilities of Blockchain technology regarding the concepts of privacy, transparency and trust. In addition, the requirements of information systems in logistics regarding the mentioned concepts are studied and brought in relation to the capabilities of Blockchain technology. The goal is to contribute to a theoretical discussion on the role of Blockchain technology in improving the flow of goods and the flow of information in logistics. Method: The research is carried out in the form of an explorative case study. Blockchain technology has not been studied previously in a logistics setting and therefore, an inductive research approach is chosen by using thematic analysis. The case study is based on a pilot test which had the goal to facilitate a Blockchain to exchange documents and track shipments. Conclusion: The findings reflect that the research on Blockchain technology is still in its infancy and that it still takes several years to facilitate the technology in a productive environment. The Blockchain has the capabilities to meet the requirements of information systems in logistics due to the ability to create trust and establish an organisation overarching platform to exchange information.",
"title": ""
}
] |
scidocsrr
|
6f4ab31fca22f899dedcb84ea87a7ac2
|
Identifying Speakers and Listeners of Quoted Speech in Literary Works
|
[
{
"docid": "67992d0c0b5f32726127855870988b01",
"text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.",
"title": ""
}
] |
[
{
"docid": "e78e70d347fb76a79755442cabe1fbe0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "005ee252d6a89d75f8200ffe1f64f2c0",
"text": "Traditionally, a distinction is made between that what is as serted by uttering a sentence and that what is presupposed. Presuppositions are characte rized as those propositions which persist even if the sentence which triggers them is neg ated. Thus ‘The king of France is bald’ presupposes that there is a king of France , since this follows from both ‘The king of France is bald’ and ‘It is not the case that the kin g of France is bald’. Stalnaker (1974) put forward the idea that a presupposition of an asserted sentence is a piece of information which is assumed by the speaker to be part of the common background of the speaker and interpreter. The presuppositions as anaphors theory of Van der Sandt (1992) — currently the best theory of presuppos ition as far as empirical predictions are concerned (Beaver 1997:983)— can be seen as o e advanced realization of Stalnaker’s basic idea. The main insight of Van der Sa ndt is that there is an interesting correspondence between the behaviour of anaph oric pronouns in discourse and the projection of presuppositions (i.e., whether and ho w presuppositions survive in complex sentences). Like most research in this area, Van d er Sandt’s work concentrates on the interaction between presuppositions and the linguistic context (i.e., the preceding sentences). However, not only linguistic contex t interacts with presuppositions. Consider:",
"title": ""
},
{
"docid": "e4d38d8ef673438e9ab231126acfda99",
"text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "80244987f22f9fe3f69fddc8af5ded5b",
"text": "In the online voting system, people can vote through the internet. In order to prevent voter frauds, we use two levels of security. In the first level of security, the face of the voter is captured by a web camera and sent to the database. Later, the face of the person is verified with the face present in the database and validated using Matlab. The comparison of the two faces is done using Local Binary Pattern algorithm. The scheme is based on a merging an image and assigns a value of a central pixel. These central pixels are labeled either 0 or 1. If the value is a lower pixel, a histogram of the labels is computed and used as a descriptor. LBP results are combined together to create one vector representing the entire face image. A password (OTP) is used as the second level of security, after entering the one time password generated to their mail it is verified and allow to vote. It should be noted that with this system in place, the users, in this case, shall be given an ample time during the voting period. They shall also be trained on how to vote online before the election time.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "370054a58b8f50719106508b138bd095",
"text": "In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas related to further reducing energy consumption in the context of in-network aggregation. The first is by influencing the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that “clusters” along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework to use temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We evaluate experimentally our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.",
"title": ""
},
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "3189fa20d605bf31c404b0327d74da79",
"text": "We now see an increasing number of self-tracking apps and wearable devices. Despite the vast number of available tools, however, it is still challenging for self-trackers to find apps that suit their unique tracking needs, preferences, and commitments. Furthermore, people are bounded by the tracking tools’ initial design because it is difficult to modify, extend, or mash up existing tools. In this paper, we present OmniTrack, a mobile self-tracking system, which enables self-trackers to construct their own trackers and customize tracking items to meet their individual tracking needs. To inform the OmniTrack design, we first conducted semi-structured interviews (N = 12) and analyzed existing mobile tracking apps (N = 62). We then designed and developed OmniTrack as an Android mobile app, leveraging a semi-automated tracking approach that combines manual and automated tracking methods. We evaluated OmniTrack through a usability study (N = 10) and improved its interfaces based on the feedback. Finally, we conducted a 3-week deployment study (N = 21) to assess if people can capitalize on OmniTrack’s flexible and customizable design to meet their tracking needs. From the study, we showed how participants used OmniTrack to create, revise, and appropriate trackers—ranging from a simple mood tracker to a sophisticated daily activity tracker. We discuss how OmniTrack positively influences and supports self-trackers’ tracking practices over time, and how to further improve OmniTrack by providing more appropriate visualizations and sharable templates, incorporating external contexts, and supporting researchers’ unique data collection needs.",
"title": ""
},
{
"docid": "70358147741dda2d10fdd2d103af9b3a",
"text": "Semi-structured documents (e.g. journal art,icles, electronic mail, television programs, mail order catalogs, . ..) a.re often not explicitly typed; the only available t,ype information is the implicit structure. An explicit t,ype, however, is needed in order to a.pply objectoriented technology, like type-specific methods. In this paper, we present a.n experimental vector space cla.ssifier for determining the type of semi-structured documents. Our goal was to design a. high-performa.nce classifier in t,erms of accuracy (recall and precision), speed, and extensibility.",
"title": ""
},
{
"docid": "b22a05d39ba34d581f0d809e89850520",
"text": "Due to recent financial crises and regulatory concerns, financial intermediaries' credit risk assessment is an area of renewed interest in both the academic world and the business community. In this paper, we propose a new fuzzy support vector machine to discriminate good creditors from bad ones. Because in credit scoring areas we usually cannot label one customer as absolutely good who is sure to repay in time, or absolutely bad who will default certainly, our new fuzzy support vector machine treats every sample as both positive and negative classes, but with different memberships. By this way we expect the new fuzzy support vector machine to have more generalization ability, while preserving the merit of insensitive to outliers, as the fuzzy support vector machine (SVM) proposed in previous papers. We reformulate this kind of two-group classification problem into a quadratic programming problem. Empirical tests on three public datasets show that it can have better discriminatory power than the standard support vector machine and the fuzzy support vector machine if appropriate kernel and membership generation method are chosen.",
"title": ""
},
{
"docid": "eb32ce661a0d074ce90861793a2e4de7",
"text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.",
"title": ""
},
{
"docid": "f670178ac943bbcc17978a0091159c7f",
"text": "In this article, we present the first academic comparable corpus involving written French and French Sign Language. After explaining our initial motivation to build a parallel set of such data, especially in the context of our work on Sign Language modelling and our prospect of machine translation into Sign Language, we present the main problems posed when mixing language channels and modalities (oral, written, signed), discussing the translation-vs-interpretation narrative in particular. We describe the process followed to guarantee feature coverage and exploitable results despite a serious cost limitation, the data being collected from professional translations. We conclude with a few uses and prospects of the corpus.",
"title": ""
},
{
"docid": "fe753c4be665700ac15509c4b831309c",
"text": "Elements of Successful Digital Transformation12 New digital technologies, particularly what we refer to as SMACIT3 (social, mobile, analytics, cloud and Internet of things [IoT]) technologies, present both game-changing opportunities and existential threats to big old companies. GE’s “industrial internet” and Philips’ digital platform for personalized healthcare information represent bets made by big old companies attempting to cash",
"title": ""
},
{
"docid": "486b140009524e48da94712191dba78e",
"text": "The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.",
"title": ""
},
{
"docid": "809384abcd6e402c1b30c3d2dfa75aa1",
"text": "Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”1 Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importanceaspsychiatricresearchfocusedonpharmacology,genomics, and neuroscience, and much of psychiatric practicehasbecomeaseriesofbriefclinical interactionsfocused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools. Some recent changes in both research and practice are promising. The National Institute of Mental Health has led an effort to create a new diagnostic approach for researchers that is intended to combine biological, behavioral, and social factors to create “precision medicine for psychiatry.”2 Although this Research Domain Criteria project has been controversial, the ensuing debate has been",
"title": ""
},
{
"docid": "404f1c68c097c74b120189af67bf00f5",
"text": "In 1991, a novel robot, MIT-MANUS, was introduced to study the potential that robots might assist in and quantify the neuro-rehabilitation of motor function. MIT-MANUS proved an excellent tool for shoulder and elbow rehabilitation in stroke patients, showing in clinical trials a reduction of impairment in movements confined to the exercised joints. This successful proof of principle as to additional targeted and intensive movement treatment prompted a test of robot training examining other limb segments. This paper focuses on a robot for wrist rehabilitation designed to provide three rotational degrees-of-freedom. The first clinical trial of the device will enroll 200 stroke survivors. Ultimately 160 stroke survivors will train with both the proximal shoulder and elbow MIT-MANUS robot, as well as with the novel distal wrist robot, in addition to 40 stroke survivor controls. So far 52 stroke patients have completed the robot training (ongoing protocol). Here, we report on the initial results on 36 of these volunteers. These results demonstrate that further improvement should be expected by adding additional training to other limb segments.",
"title": ""
},
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "90b2d777eeac2466293c60ba699ea76b",
"text": "As autonomous vehicles become an every-day reality, high-accuracy pedestrian detection is of paramount practical importance. Pedestrian detection is a highly researched topic with mature methods, but most datasets (for both training and evaluation) focus on common scenes of people engaged in typical walking poses on sidewalks. But performance is most crucial for dangerous scenarios that are rarely observed, such as children playing in the street and people using bicycles/skateboards in unexpected ways. Such in-the-tail data is notoriously hard to observe, making both training and testing difficult. To analyze this problem, we have collected a novel annotated dataset of dangerous scenarios called the Precarious Pedestrian dataset. Even given a dedicated collection effort, it is relatively small by contemporary standards (≈ 1000 images). To explore large-scale data-driven learning, we explore the use of synthetic data generated by a game engine. A significant challenge is selected the right priors or parameters for synthesis: we would like realistic data with realistic poses and object configurations. Inspired by Generative Adversarial Networks, we generate a massive amount of synthetic data and train a discriminative classifier to select a realistic subset (that fools the classifier), which we deem Synthetic Imposters. We demonstrate that this pipeline allows one to generate realistic (or adverserial) training data by making use of rendering/animation engines. Interestingly, we also demonstrate that such data can be used to rank algorithms, suggesting that Synthetic Imposters can also be used for in-the-tail validation at test-time, a notoriously difficult challenge for real-world deployment.",
"title": ""
},
{
"docid": "2c4fed71ee9d658516b017a924ad6589",
"text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
}
] |
scidocsrr
|
443b689900fc69a1a256fb30af2036e5
|
SYSTEMS CONTINUANCE : AN EXPECTATION-CONFIRMATION MODEL 1 By :
|
[
{
"docid": "6c2afcf5d7db0f5d6baa9d435c203f8a",
"text": "An attempt to extend current thinking on postpurchase response to include attribute satisfaction and dissatisfaction as separate determinants not fully reflected in either cognitive (i.e.. expectancy disconfirmation) or affective paradigms is presented. In separate studies of automobile satisfaction and satisfaction with course instruction, respondents provided the nature of emotional experience, disconfirmation perceptions, and separate attribute satisfaction and dissatisfaction judgments. Analysis confirmed the disconfirmation effect and tbe effects of separate dimensions of positive and negative affect and also suggested a multidimensional structure to the affect dimensions. Additionally, attribute satisfaction and dissatisfaction were significantly related to positive and negative affect, respectively, and to overall satisfaction. It is suggested that all dimensions tested are needed for a full accounting of postpurchase responses in usage.",
"title": ""
}
] |
[
{
"docid": "b3e90fdfda5346544f769b6dd7c3882b",
"text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.",
"title": ""
},
{
"docid": "135d451e66cdc8d47add47379c1c35f9",
"text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"title": ""
},
{
"docid": "9a9fd442bc7353d9cd202e9ace6e6580",
"text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.",
"title": ""
},
{
"docid": "22285844f638715765d21bff139d1bb1",
"text": "The field of Terahertz (THz) radiation, electromagnetic energy, between 0.3 to 3 THz, has seen intense interest recently, because it combines some of the best properties of IR along with those of RF. For example, THz radiation can penetrate fabrics with less attenuation than IR, while its short wavelength maintains comparable imaging capabilities. We discuss major challenges in the field: designing systems and applications which fully exploit the unique properties of THz radiation. To illustrate, we present our reflective, radar-inspired THz imaging system and results, centered on biomedical burn imaging and skin hydration, and discuss challenges and ongoing research.",
"title": ""
},
{
"docid": "85d9b0ed2e9838811bf3b07bb31dbeb6",
"text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.",
"title": ""
},
{
"docid": "0d2260653f223db82e2e713f211a2ba0",
"text": "Smartphone usage is a hot topic in pervasive computing due to their popularity and personal aspect. We present our initial results from analyzing how individual differences, such as gender and age, affect smartphone usage. The dataset comes from a large scale longitudinal study, the Menthal project. We select a sample of 30, 677 participants, from which 16, 147 are males and 14, 523 are females, with a median age of 21 years. These have been tracked for at least 28 days and they have submitted their demographic data through a questionnaire. The ongoing experiment has been started in January 2014 and we have used our own mobile data collection and analysis framework. Females use smartphones for longer periods than males, with a daily mean of 166.78 minutes vs. 154.26 minutes. Younger participants use their phones longer and usage is directed towards entertainment and social interactions through specialized apps. Older participants use it less and mainly for getting information or using it as a classic phone.",
"title": ""
},
{
"docid": "893942f986718d639aa46930124af679",
"text": "In this work we consider the problem of controlling a team of microaerial vehicles moving quickly through a three-dimensional environment while maintaining a tight formation. The formation is specified by a shape matrix that prescribes the relative separations and bearings between the robots. Each robot plans its trajectory independently based on its local information of other robot plans and estimates of states of other robots in the team to maintain the desired shape. We explore the interaction between nonlinear decentralized controllers, the fourth-order dynamics of the individual robots, the time delays in the network, and the effects of communication failures on system performance. An experimental evaluation of our approach on a team of quadrotors suggests that suitable performance is maintained as the formation motions become increasingly aggressive and as communication degrades.",
"title": ""
},
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "1c6a9910a51656a47a8599a98dba77bb",
"text": "In real life facial expressions show mixture of emotions. This paper proposes a novel expression descriptor based expression map that can efficiently represent pure, mixture and transition of facial expressions. The expression descriptor is the integration of optic flow and image gradient values and the descriptor value is accumulated in temporal scale. The expression map is realized using self-organizing map. We develop an objective scheme to find the percentage of different prototypical pure emotions (e.g., happiness, surprise, disgust etc.) that mix up to generate a real facial expression. Experimental results show that the expression map can be used as an effective classifier for facial expressions.",
"title": ""
},
{
"docid": "210052dbabdb5c48502079d75cdd6ce6",
"text": "Sketch It, Make It (SIMI) is a modeling tool that enables non-experts to design items for fabrication with laser cutters. SIMI recognizes rough, freehand input as a user iteratively edits a structured vector drawing. The tool combines the strengths of sketch-based interaction with the power of constraint-based modeling. Several interaction techniques are combined to present a coherent system that makes it easier to make precise designs for laser cutters.",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "753f837e53a08a59392c30515481b503",
"text": "Light is a powerful zeitgeber that synchronizes our endogenous circadian pacemaker with the environment and has been previously described as an agent in improving cognitive performance. With that in mind, this study was designed to explore the influence of exposure to blue-enriched white light in the morning on the performance of adolescent students. 58 High school students were recruited from four classes in two schools. In each school, one classroom was equipped with blue-enriched white lighting while the classroom next door served as a control setting. The effects of classroom lighting on cognitive performance were assessed using standardized psychological tests. Results show beneficial effects of blue-enriched white light on students' performance. In comparison to standard lighting conditions, students showed faster cognitive processing speed and better concentration. The blue-enriched white lighting seems to influence very basic information processing primarily, as no effects on short-term encoding and retrieval of memories were found. & 2014 Elsevier GmbH. All rights reserved.",
"title": ""
},
{
"docid": "47b7ebc460ce1273941bdef5bc754d4a",
"text": "When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions. More realistic self-predictions require greater sensitivity to situational barriers, such as obstacles or competing demands, that may interfere with the translation of current intentions into future behavior. We consider three reasons why people may not adjust sufficiently for such barriers. First, self-predictions may focus exclusively on current intentions, ignoring potential barriers altogether. We test this possibility, in three studies, with manipulations that draw greater attention to barriers. Second, barriers may be discounted in the self-prediction process. We test this possibility by comparing prospective and retrospective ratings of the impact of barriers on the target behavior. Neither possibility was supported in these tests, or in a further test examining whether an optimally weighted statistical model could improve on the accuracy of self-predictions by placing greater weight on anticipated situational barriers. Instead, the evidence supports a third possibility: Even when they acknowledge that situational factors can affect the likelihood of carrying out an intended behavior, people do not adequately moderate the weight placed on their current intentions when predicting their future behavior.",
"title": ""
},
{
"docid": "9a397ca2a072d9b1f861f8a6770aa792",
"text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.",
"title": ""
},
{
"docid": "9c1687323661ccb6bf2151824edc4260",
"text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "04435e017e720c0ed6e5c0cd29f1b4fc",
"text": "Blobworld is a system for image retrieval based on finding coherent image regions which roughly correspond to objects. Each image is automatically segmented into regions (“blobs”) with associated color and texture descriptors. Querying is based on the attributes of one or two regions of interest, rather than a description of the entire image. In order to make large-scale retrieval feasible, we index the blob descriptions using a tree. Because indexing in the high-dimensional feature space is computationally prohibitive, we use a lower-rank approximation to the high-dimensional distance. Experiments show encouraging results for both querying and indexing.",
"title": ""
}
] |
scidocsrr
|
09ab79166d649d927ba1096fdb2fd5a6
|
Learning Knowledge Graphs for Question Answering through Conversational Dialog
|
[
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
}
] |
[
{
"docid": "f7a69acbc2766e990cbd4f3c9b4124d1",
"text": "This paper aims at assisting empirical researchers benefit from recent advances in causal inference. The paper stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, and the conditional nature of causal claims inferred from nonexperimental studies. These emphases are illustrated through a brief survey of recent results, including the control of confounding, the assessment of causal effects, the interpretation of counterfactuals, and a symbiosis between counterfactual and graphical methods of analysis.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "f717225fa7518383e0db362e673b9af4",
"text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "945cf1645df24629842c5e341c3822e7",
"text": "Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before outsourced to the commercial public cloud, which makes effective data utilization service a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by large number of users and huge amount of data files in cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures the file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect those sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.",
"title": ""
},
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "733e5961428e5aad785926e389b9bd75",
"text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.",
"title": ""
},
{
"docid": "d0a4bc15208b12b1647eb21e7ca9cc6c",
"text": "The investment in an automated fabric defect detection system is more than economical when reduction in labor cost and associated benefits are considered. The development of a fully automated web inspection system requires robust and efficient fabric defect detection algorithms. The inspection of real fabric defects is particularly challenging due to the large number of fabric defect classes, which are characterized by their vagueness and ambiguity. Numerous techniques have been developed to detect fabric defects and the purpose of this paper is to categorize and/or describe these algorithms. This paper attempts to present the first survey on fabric defect detection techniques presented in about 160 references. Categorization of fabric defect detection techniques is useful in evaluating the qualities of identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful. Therefore, on the basis of the nature of features from the fabric surfaces, the proposed approaches have been characterized into three categories; statistical, spectral and model-based. In order to evaluate the state-of-the-art, the limitations of several promising techniques are identified and performances are analyzed in the context of their demonstrated results and intended application. The conclusions from this paper also suggest that the combination of statistical, spectral and model-based approaches can give better results than any single approach, and is suggested for further research.",
"title": ""
},
{
"docid": "e72382020e2b15be32047da611ad078f",
"text": "This article describes the results of a case study that applies Neural Networkbased Optical Character Recognition (OCR) to scanned images of books printed between 1487 and 1870 by training the OCR engine OCRopus (Breuel et al. 2013) on the RIDGES herbal text corpus (Odebrecht et al. 2017, in press). Training specific OCR models was possible because the necessary ground truth is available as error-corrected diplomatic transcriptions. The OCR results have been evaluated for accuracy against the ground truth of unseen test sets. Character and word accuracies (percentage of correctly recognized items) for the resulting machine-readable texts of individual documents range from 94% to more than 99% (character level) and from 76% to 97% (word level). This includes the earliest printed books, which were thought to be inaccessible by OCR methods until recently. Furthermore, OCR models trained on one part of the corpus consisting of books with different printing dates and different typesets (mixed models) have been tested for their predictive power on the books from the other part containing yet other fonts, mostly yielding character accuracies well above 90%. It therefore seems possible to construct generalized models trained on a range of fonts that can be applied to a wide variety of historical printings still giving good results. A moderate postcorrection effort of some pages will then enable the training of individual models with even better accuracies. Using this method, diachronic corpora including early printings can be constructed much faster and cheaper than by manual transcription. The OCR methods reported here open up the possibility of transforming our printed textual cultural 1 ar X iv :1 60 8. 02 15 3v 2 [ cs .C L ] 1 F eb 2 01 7 Springmann & Lüdeling OCR of historical printings heritage into electronic text by largely automatic means, which is a prerequisite for the mass conversion of scanned books.",
"title": ""
},
{
"docid": "2ad8723c9fce1a6264672f41824963f8",
"text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.",
"title": ""
},
{
"docid": "e7616fbe9853bf8e1c89441287baf30c",
"text": "The objective of the current study is to compare the use of a nasal continuous positive airway pressure (nCPAP) to a high-flow humidified nasal cannula (HFNC) in infants with acute bronchiolitis, who were admitted to a pediatric intensive care unit (PICU) during two consecutive seasons. We retrospectively reviewed the medical records of all infants admitted to a PICU at a tertiary care French hospital during the bronchiolitis seasons of 2010/11 and 2011/12. Infants admitted to the PICU, who required noninvasive respiratory support, were included. The first noninvasive respiratory support modality was nCPAP during the 2010/11 season, while HFNC was used during the 2011/2012 season. We compared the length of stay (LOS) in the PICU; the daily measure of PCO2 and pH; and the mean of the five higher values of heart rate (HR), respiratory rate (RR), FiO2, and SpO2 each day, during the first 5 days. Thirty-four children met the inclusion criteria: 19 during the first period (nCPAP group) and 15 during the second period (HFNC group). Parameters such as LOS in PICU and oxygenation were similar in the two groups. Oxygen weaning occurred during the same time for the two groups. There were no differences between the two groups for RR, HR, FiO2, and CO2 evolution. HFNC therapy failed in three patients, two of whom required invasive mechanical ventilation, versus one in the nCPAP group. Conclusion: We did not find a difference between HFNC and nCPAP in the management of severe bronchiolitis in our PICU. Larger prospective studies are required to confirm these findings.",
"title": ""
},
{
"docid": "0879f749188cbb88a8cefff60d0d4f6e",
"text": "Raw tomato contains a high level of lycopene, which has been reported to have many important health benefits. However, information on the changes of the lycopene content in tomato during cooking is limited. In this study, the lycopene content in raw and thermally processed (baked, microwaved, and fried) tomato slurries was investigated and analyzed using a high-performance liquid chromatography (HPLC) method. In the thermal stability study using a pure lycopene standard, 50% of lycopene was degraded at 100 ◦C after 60 min, 125 ◦C after 20 min, and 150 ◦C after less than 10 min. Only 64.1% and 51.5% lycopene was retained when the tomato slurry was baked at 177 ◦C and 218 ◦C for 15 min, respectively. At these temperatures, only 37.3% and 25.1% of lycopene was retained after baking for 45 min. In 1 min of the high power of microwave heating, 64.4% of lycopene still remained. However, more degradation of lycopene in the slurry was found in the frying study. Only 36.6% and 35.5% of lycopene was retained after frying at 145 and 165 ◦C for 1 min, respectively.",
"title": ""
},
{
"docid": "b088438d5e44d9fc2bd4156dbb708b1a",
"text": "Applying parallelism to constraint solving seems a promising approach and it has been done with varying degrees of success. Early attempts to parallelize constraint propagation, which constitutes the core of traditional interleaved propagation and search constraint solving, were hindered by its essentially sequential nature. Recently, parallelization efforts have focussed mainly on the search part of constraint solving, as well as on local-search based solving. Lately, a particular source of parallelism has become pervasive, in the guise of GPUs, able to run thousands of parallel threads, and they have naturally drawn the attention of researchers in parallel constraint solving. In this paper, we address challenges faced when using multiple devices for constraint solving, especially GPUs, such as deciding on the appropriate level of parallelism to employ, load balancing and inter-device communication, and present our current solutions.",
"title": ""
},
{
"docid": "5a25af5b9c51b7b1a7b36f0c9b121add",
"text": "BACKGROUND\nCircumcision is a common procedure, but regional and societal attitudes differ on whether there is a need for a male to be circumcised and, if so, at what age. This is an important issue for many parents, but also pediatricians, other doctors, policy makers, public health authorities, medical bodies, and males themselves.\n\n\nDISCUSSION\nWe show here that infancy is an optimal time for clinical circumcision because an infant's low mobility facilitates the use of local anesthesia, sutures are not required, healing is quick, cosmetic outcome is usually excellent, costs are minimal, and complications are uncommon. The benefits of infant circumcision include prevention of urinary tract infections (a cause of renal scarring), reduction in risk of inflammatory foreskin conditions such as balanoposthitis, foreskin injuries, phimosis and paraphimosis. When the boy later becomes sexually active he has substantial protection against risk of HIV and other viral sexually transmitted infections such as genital herpes and oncogenic human papillomavirus, as well as penile cancer. The risk of cervical cancer in his female partner(s) is also reduced. Circumcision in adolescence or adulthood may evoke a fear of pain, penile damage or reduced sexual pleasure, even though unfounded. Time off work or school will be needed, cost is much greater, as are risks of complications, healing is slower, and stitches or tissue glue must be used.\n\n\nSUMMARY\nInfant circumcision is safe, simple, convenient and cost-effective. The available evidence strongly supports infancy as the optimal time for circumcision.",
"title": ""
},
{
"docid": "5816f70a7f4d7d0beb6e0653db962df3",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "dfca655ee52769c9c1d26e8c3f5b883f",
"text": "BACKGROUND\nDihydrocapsiate (DCT) is a natural safe food ingredient which is structurally related to capsaicin from chili pepper and is found in the non-pungent pepper strain, CH-19 Sweet. It has been shown to elicit the thermogenic effects of capsaicin but without its gastrointestinal side effects.\n\n\nMETHODS\nThe present study was designed to examine the effects of DCT on both adaptive thermogenesis as the result of caloric restriction with a high protein very low calorie diet (VLCD) and to determine whether DCT would increase post-prandial energy expenditure (PPEE) in response to a 400 kcal/60 g protein liquid test meal. Thirty-three subjects completed an outpatient very low calorie diet (800 kcal/day providing 120 g/day protein) over 4 weeks and were randomly assigned to receive either DCT capsules three times per day (3 mg or 9 mg) or placebo. At baseline and 4 weeks, fasting basal metabolic rate and PPEE were measured in a metabolic hood and fat free mass (FFM) determined using displacement plethysmography (BOD POD).\n\n\nRESULTS\nPPEE normalized to FFM was increased significantly in subjects receiving 9 mg/day DCT by comparison to placebo (p < 0.05), but decreases in resting metabolic rate were not affected. Respiratory quotient (RQ) increased by 0.04 in the placebo group (p < 0.05) at end of the 4 weeks, but did not change in groups receiving DCT.\n\n\nCONCLUSIONS\nThese data provide evidence for postprandial increases in thermogenesis and fat oxidation secondary to administration of dihydrocapsiate.\n\n\nTRIAL REGISTRATION\nclinicaltrial.govNCT01142687.",
"title": ""
},
{
"docid": "627f3c07a8ce5f0935ced97f685f44f4",
"text": "Click-through rate (CTR) prediction plays a central role in search advertising. One needs CTR estimates unbiased by positional effect in order for ad ranking, allocation, and pricing to be based upon ad relevance or quality in terms of click propensity. However, the observed click-through data has been confounded by positional bias, that is, users tend to click more on ads shown in higher positions than lower ones, regardless of the ad relevance. We describe a probabilistic factor model as a general principled approach to studying these exogenous and often overwhelming phenomena. The model is simple and linear in nature, while empirically justified by the advertising domain. Our experimental results with artificial and real-world sponsored search data show the soundness of the underlying model assumption, which in turn yields superior prediction accuracy.",
"title": ""
},
{
"docid": "be3296a4c18c8c102d9365d9ab092cf4",
"text": "Color barcode-based visible light communication (VLC) over screen-camera links has attracted great research interest in recent years due to its many desirable properties, including free of charge, free of interference, free of complex network configuration and well-controlled communication security. To achieve high-throughput barcode streaming, previous systems separately address design challenges such as image blur, imperfect frame synchronization and error correction etc., without being investigated as an interrelated whole. This does not fully exploit the capacity of color barcode streaming, and these solutions all have their own limitations from a practical perspective. This paper proposes RainBar, a new and improved color barcode-based visual communication system, which features a carefully-designed high-capacity barcode layout design to allow flexible frame synchronization and accurate code extraction. A progressive code locator detection and localization scheme and a robust color recognition scheme are proposed to enhance system robustness and hence the decoding rate under various working conditions. An extensive experimental study is presented to demonstrate the effectiveness and flexibility of RainBar. Results on Android smartphones show that our system achieves higher average throughput than previous systems, under various working environments.",
"title": ""
},
{
"docid": "45ea8497ccd9f63d519e40ef41938331",
"text": "The appearance of an object in an image encodes invaluable information about that object and the surrounding scene. Inferring object reflectance and scene illumination from an image would help us decode this information: reflectance can reveal important properties about the materials composing an object; the illumination can tell us, for instance, whether the scene is indoors or outdoors. Recovering reflectance and illumination from a single image in the real world, however, is a difficult task. Real scenes illuminate objects from every visible direction and real objects vary greatly in reflectance behavior. In addition, the image formation process introduces ambiguities, like color constancy, that make reversing the process ill-posed. To address this problem, we propose a Bayesian framework for joint reflectance and illumination inference in the real world. We develop a reflectance model and priors that precisely capture the space of real-world object reflectance and a flexible illumination model that can represent real-world illumination with priors that combat the deleterious effects of image formation. We analyze the performance of our approach on a set of synthetic data and demonstrate results on real-world scenes. These contributions enable reliable reflectance and illumination inference in the real world.",
"title": ""
},
{
"docid": "bb9f86e800e3f00bf7b34be85d846ff0",
"text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
}
] |
scidocsrr
|
8cb7a552bede83ab496868c160895e99
|
Two people walk into a bar: dynamic multi-party social interaction with a robot agent
|
[
{
"docid": "1836291f68e18f8975803f6acbb302be",
"text": "We review key challenges of developing spoken dialog systems that can engage in interactions with one or multiple participants in relatively unconstrained environments. We outline a set of core competencies for open-world dialog, and describe three prototype systems. The systems are built on a common underlying conversational framework which integrates an array of predictive models and component technologies, including speech recognition, head and pose tracking, probabilistic models for scene analysis, multiparty engagement and turn taking, and inferences about user goals and activities. We discuss the current models and showcase their function by means of a sample recorded interaction, and we review results from an observational study of open-world, multiparty dialog in the wild.",
"title": ""
},
{
"docid": "8a56b4d4f69466aee0d5eff0c09cd514",
"text": "This paper explores how a robot’s physical presence affects human judgments of the robot as a social partner. For this experiment, participants collaborated on simple book-moving tasks with a humanoid robot that was either physically present or displayed via a live video feed. Multiple tasks individually examined the following aspects of social interaction: greetings, cooperation, trust, and personal space. Participants readily greeted and cooperated with the robot whether present physically or in live video display. However, participants were more likely both to fulfill an unusual request and to afford greater personal space to the robot when it was physically present, than when it was shown on live video. The same was true when the live video displayed robot’s gestures were augmented with disambiguating 3-D information. Questionnaire data support these behavioral findings and also show that participants had an overall more positive interaction with the physically present",
"title": ""
}
] |
[
{
"docid": "6eb8e1a391398788d9b4be294b8a70d1",
"text": "To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.",
"title": ""
},
{
"docid": "d62c2e7ca3040900d04f83ef4f99de4f",
"text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.",
"title": ""
},
{
"docid": "f1255742f2b1851457dd92ad97db7c8e",
"text": "Model transformations are frequently applied in business process modeling to bridge between languages on a different level of abstraction and formality. In this paper, we define a transformation between BPMN which is developed to enable business user to develop readily understandable graphical representations of business processes and YAWL, a formal workflow language that is able to capture all of the 20 workflow patterns reported. We illustrate the transformation challenges and present a suitable transformation algorithm. The benefit of the transformation is threefold. Firstly, it clarifies the semantics of BPMN via a mapping to YAWL. Secondly, the deployment of BPMN business process models is simplified. Thirdly, BPMN models can be analyzed with YAWL verification tools.",
"title": ""
},
{
"docid": "bd1ab7a30b4478a6320e5cad4698c2b4",
"text": "Corresponding Author: Jing Wang Boston University, Boston, MA, USA Email: [email protected] Abstract: Non-inferiority of a diagnostic test to the standard is a common issue in medical research. For instance, we may be interested in determining if a new diagnostic test is noninferior to the standard reference test because the new test might be inexpensive to the extent that some small inferior margin in sensitivity or specificity may be acceptable. Noninferiority trials are also found to be useful in clinical trials, such as image studies, where the data are collected in pairs. Conventional noninferiority trials for paired binary data are designed with a fixed sample size and no interim analysis is allowed. Adaptive design which allows for interim modifications of the trial becomes very popular in recent years and are widely used in clinical trials because of its efficiency. However, to our knowledge there is no adaptive design method available for noninferiority trial with paired binary data. In this study, we developed an adaptive design method for non-inferiority trials with paired binary data, which can also be used for superiority trials when the noninferiority margin is set to zero. We included a trial example and provided the SAS program for the design simulations.",
"title": ""
},
{
"docid": "0df2f2c6e7d9c16a767cb5630244ec35",
"text": "Alim Samat 1,2,*, Paolo Gamba 3, Jilili Abuduwaili 1,2, Sicong Liu 4 and Zelang Miao 5 1 State Key Laboratory of Desert and Oasis Ecology, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; [email protected] 2 Chinese Academy of Sciences Research Center for Ecology and Environment of Central Asia, Urumqi 830011, China 3 Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy; [email protected] 4 College of Surveying and Geoinformatics, Tongji University, Shanghai 200092, China; [email protected] 5 Department of Land Surveying and Geo-Informatics, Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, China; [email protected] * Correspondence: [email protected]; Tel.: +86-991-788-5432",
"title": ""
},
{
"docid": "5e3d770390e03445c079c05a097fb891",
"text": "Electronic Commerce has increased the global reach of small and medium scale enterprises (SMEs); its acceptance as an IT infrastructure depends on the users’ conscious assessment of the influencing constructs as depicted in Technology Acceptance Model (TAM), Theory of Reasoned Action (TRA), Theory of Planned Behaviour (TPB), and Technology-Organization-Environment (T-O-E) model. The original TAM assumes the constructs of perceived usefulness (PU) and perceived ease of use (PEOU); TPB perceived behavioural control and subjective norms; and T-O-E firm’s size, consumer readiness, trading partners’ readiness, competitive pressure, and scope of business operation. This paper reviewed and synthesized the constructs of these models and proposed an improved TAM through T-O-E. The improved TAM and T-O-E integrated more constructs than the original TAM, T-O-E, TPB, and IDT, leading to eighteen propositions to promote and facilitate future research, and to guide explanation and prediction of IT adoption in an organized system. The integrated constructscompany mission, individual difference factors, perceived trust, and perceived service quality improve existing knowledge on EC acceptance and provide bases for informed decision(s).",
"title": ""
},
{
"docid": "816b2ed7d4b8ce3a8fc54e020bc2f712",
"text": "As a standardized communication protocol, OPC UA is the main focal point with regard to information exchange in the ongoing initiative Industrie 4.0. But there are also considerations to use it within the Internet of Things. The fact that currently no open reference implementation can be used in research for free represents a major problem in this context. The authors have the opinion that open source software can stabilize the ongoing theoretical work. Recent efforts to develop an open implementation for OPC UA were not able to meet the requirements of practical and industrial automation technology. This issue is addressed by the open62541 project which is presented in this article including an overview of its application fields and main research issues.",
"title": ""
},
{
"docid": "e247a8ae2a83150d83a8248ec96a4708",
"text": "Benjamin Beck’s definition of tool use has served the field of animal cognition well for over 25 years (Beck 1980, Animal Tool Behavior: the Use and Manufacture of Tools, New York, Garland STPM). This article proposes a new, more explanatory definition that accounts for tool use in terms of two complementary subcategories of behaviours: behaviours aimed at altering a target object by mechanical means and behaviours that mediate the flow of information between the tool user and the environment or other organisms in the environment. The conceptual foundation and implications of the new definition are contrasted with those of existing definitions, particularly Beck’s. The new definition is informally evaluated with respect to a set of scenarios that highlights differences from Beck’s definition as well as those of others in the literature.",
"title": ""
},
{
"docid": "aac3060a199b016e38be800c213c9dba",
"text": "In this paper, we investigate the use of electroencephalograhic signals for the purpose of recognizing unspoken speech. The term unspoken speech refers to the process in which a subject imagines speaking a given word without moving any articulatory muscle or producing any audible sound. Early work by Wester (Wester, 2006) presented results which were initially interpreted to be related to brain activity patterns due to the imagination of pronouncing words. However, subsequent investigations lead to the hypothesis that the good recognition performance might instead have resulted from temporal correlated artifacts in the brainwaves since the words were presented in blocks. In order to further investigate this hypothesis, we run a study with 21 subjects, recording 16 EEG channels using a 128 cap montage. The vocabulary consists of 5 words, each of which is repeated 20 times during a recording session in order to train our HMM-based classifier. The words are presented in blockwise, sequential, and random order. We show that the block mode yields an average recognition rate of 45.50%, but it drops to chance level for all other modes. Our experiments suggest that temporal correlated artifacts were recognized instead of words in block recordings and back the above-mentioned hypothesis.",
"title": ""
},
{
"docid": "566913d3a3d2e8fe24d6f5ff78440b94",
"text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.",
"title": ""
},
{
"docid": "a2adeb9448c699bbcbb10d02a87e87a5",
"text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "122fe53f1e745480837a23b68e62540a",
"text": "The images degraded by fog suffer from poor contrast. In order to remove fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply the edge detection to the image. The results show that our method is effective in comparison with traditional methods. KeywordsCLAHE, fog, degraded, remove, color image, HSI, edge detection.",
"title": ""
},
{
"docid": "d7b77fae980b3bc26ffb4917d6d093c1",
"text": "This work presents a combination of a teach-and-replay visual navigation and Monte Carlo localization methods. It improves a reliable teach-and-replay navigation method by replacing its dependency on precise dead-reckoning by introducing Monte Carlo localization to determine robot position along the learned path. In consequence, the navigation method becomes robust to dead-reckoning errors, can be started from at any point in the map and can deal with the ‘kidnapped robot’ problem. Furthermore, the robot is localized with MCL only along the taught path, i.e. in one dimension, which does not require a high number of particles and significantly reduces the computational cost. Thus, the combination of MCL and teach-and-replay navigation mitigates the disadvantages of both methods. The method was tested using a P3-AT ground robot and a Parrot AR.Drone aerial robot over a long indoor corridor. Experiments show the validity of the approach and establish a solid base for continuing this work.",
"title": ""
},
{
"docid": "7d0a7073733f8393478be44d820e89ae",
"text": "Modeling user-item interaction patterns is an important task for personalized recommendations. Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.",
"title": ""
},
{
"docid": "76f68e1741a4022ce987d1b5da6fb0ff",
"text": "Transfer in reinforcement learning is a novel research area that focuses on the development of methods to transfer knowledge from a se t of source tasks to a target task. Whenever the tasks are similar, the transferred knowledge can be used by a learning algorithm to solve the target task and sign ifica tly improve its performance (e.g., by reducing the number of samples needed to achieve a nearly optimal performance). In this chapter we provide a formaliz ation of the general transfer problem, we identify the main settings which have b e n investigated so far, and we review the most important approaches to transfer in re inforcement learning.",
"title": ""
},
{
"docid": "888e55e684cd3fa4f09473672fc0a865",
"text": "Node-link diagrams are an effective and popular visualization approach for depicting hierarchical structures and for showing parent-child relationships. In this paper, we present the results of an eye tracking experiment investigating traditional, orthogonal, and radial node-link tree layouts as a piece of empirical basis for choosing between those layouts. Eye tracking was used to identify visual exploration behaviors of participants that were asked to solve a typical hierarchy exploration task by inspecting a static tree diagram: finding the least common ancestor of a given set of marked leaf nodes. To uncover exploration strategies, we examined fixation points, duration, and saccades of participants' gaze trajectories. For the non-radial diagrams, we additionally investigated the effect of diagram orientation by switching the position of the root node to each of the four main orientations. We also recorded and analyzed correctness of answers as well as completion times in addition to the eye movement data. We found out that traditional and orthogonal tree layouts significantly outperform radial tree layouts for the given task. Furthermore, by applying trajectory analysis techniques we uncovered that participants cross-checked their task solution more often in the radial than in the non-radial layouts.",
"title": ""
},
{
"docid": "c55c4fed2a96f9be539fec42de857d0c",
"text": "This paper presents a new inertial power generator for scavenging low-frequency nonperiodic vibrations called the Parametric Frequency-Increased Generator (PFIG). The PFIG utilizes three magnetically coupled mechanical structures to initiate high-frequency mechanical oscillations in an electromechanical transducer. The fixed internal displacement and dynamics of the PFIG allow it to operate more effectively than resonant generators when the ambient vibration amplitude is higher than the internal displacement limit of the device. The design, fabrication, and testing of an electromagnetic PFIG are discussed. The developed PFIG can generate a peak power of 163 μW and an average power of 13.6 μW from an input acceleration of 9.8 m/s2 at 10 Hz, and it can operate at frequencies up to 65 Hz, giving it an unprecedented operating bandwidth and versatility. The internal volume of the generator is 2.12 cm3 (3.75 cm3 including the casing). The harvester has a volume figure of merit of 0.068% and a bandwidth figure of merit of 0.375%. These values, although seemingly low, are the highest reported in the literature for a device of this size and operating in the difficult frequency range of ≤ 20 Hz.",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "0fc051613dd8ac7b555a85f0ed2cccbc",
"text": "BACKGROUND\nAtezolizumab is a humanised antiprogrammed death-ligand 1 (PD-L1) monoclonal antibody that inhibits PD-L1 and programmed death-1 (PD-1) and PD-L1 and B7-1 interactions, reinvigorating anticancer immunity. We assessed its efficacy and safety versus docetaxel in previously treated patients with non-small-cell lung cancer.\n\n\nMETHODS\nWe did a randomised, open-label, phase 3 trial (OAK) in 194 academic or community oncology centres in 31 countries. We enrolled patients who had squamous or non-squamous non-small-cell lung cancer, were 18 years or older, had measurable disease per Response Evaluation Criteria in Solid Tumors, and had an Eastern Cooperative Oncology Group performance status of 0 or 1. Patients had received one to two previous cytotoxic chemotherapy regimens (one or more platinum based combination therapies) for stage IIIB or IV non-small-cell lung cancer. Patients with a history of autoimmune disease and those who had received previous treatments with docetaxel, CD137 agonists, anti-CTLA4, or therapies targeting the PD-L1 and PD-1 pathway were excluded. Patients were randomly assigned (1:1) to intravenously receive either atezolizumab 1200 mg or docetaxel 75 mg/m2 every 3 weeks by permuted block randomisation (block size of eight) via an interactive voice or web response system. Coprimary endpoints were overall survival in the intention-to-treat (ITT) and PD-L1-expression population TC1/2/3 or IC1/2/3 (≥1% PD-L1 on tumour cells or tumour-infiltrating immune cells). The primary efficacy analysis was done in the first 850 of 1225 enrolled patients. This study is registered with ClinicalTrials.gov, number NCT02008227.\n\n\nFINDINGS\nBetween March 11, 2014, and April 29, 2015, 1225 patients were recruited. In the primary population, 425 patients were randomly assigned to receive atezolizumab and 425 patients were assigned to receive docetaxel. Overall survival was significantly longer with atezolizumab in the ITT and PD-L1-expression populations. In the ITT population, overall survival was improved with atezolizumab compared with docetaxel (median overall survival was 13·8 months [95% CI 11·8-15·7] vs 9·6 months [8·6-11·2]; hazard ratio [HR] 0·73 [95% CI 0·62-0·87], p=0·0003). Overall survival in the TC1/2/3 or IC1/2/3 population was improved with atezolizumab (n=241) compared with docetaxel (n=222; median overall survival was 15·7 months [95% CI 12·6-18·0] with atezolizumab vs 10·3 months [8·8-12·0] with docetaxel; HR 0·74 [95% CI 0·58-0·93]; p=0·0102). Patients in the PD-L1 low or undetectable subgroup (TC0 and IC0) also had improved survival with atezolizumab (median overall survival 12·6 months vs 8·9 months; HR 0·75 [95% CI 0·59-0·96]). Overall survival improvement was similar in patients with squamous (HR 0·73 [95% CI 0·54-0·98]; n=112 in the atezolizumab group and n=110 in the docetaxel group) or non-squamous (0·73 [0·60-0·89]; n=313 and n=315) histology. Fewer patients had treatment-related grade 3 or 4 adverse events with atezolizumab (90 [15%] of 609 patients) versus docetaxel (247 [43%] of 578 patients). 
One treatment-related death from a respiratory tract infection was reported in the docetaxel group.\n\n\nINTERPRETATION\nTo our knowledge, OAK is the first randomised phase 3 study to report results of a PD-L1-targeted therapy, with atezolizumab treatment resulting in a clinically relevant improvement of overall survival versus docetaxel in previously treated non-small-cell lung cancer, regardless of PD-L1 expression or histology, with a favourable safety profile.\n\n\nFUNDING\nF. Hoffmann-La Roche Ltd, Genentech, Inc.",
"title": ""
}
] |
scidocsrr
|
2cc2e925a6c9e27a96631a977fe00740
|
Modular Architecture for StarCraft II with Deep Reinforcement Learning
|
[
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
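The COMA abstract above describes a counterfactual baseline that marginalises out one agent's action under its own policy while the other agents' actions stay fixed. A minimal PyTorch sketch of that advantage computation follows; it assumes the centralised critic has already produced, in a single forward pass, the Q-values for every candidate action of agent a with the other agents' actions held fixed (the tensor shapes are illustrative assumptions).

```python
# Minimal sketch of the COMA counterfactual advantage, assuming the critic
# returns Q(s, u^{-a}, .) over agent a's actions in one forward pass.
import torch


def coma_advantage(q_values: torch.Tensor,
                   pi: torch.Tensor,
                   action: torch.Tensor) -> torch.Tensor:
    """
    q_values: (batch, n_actions) critic output for agent a's candidate actions
    pi:       (batch, n_actions) agent a's current policy probabilities
    action:   (batch,) index of the action agent a actually executed
    """
    # Counterfactual baseline: expected Q under agent a's own policy.
    baseline = (pi * q_values).sum(dim=-1)
    # Q-value of the action that was actually taken.
    q_taken = q_values.gather(-1, action.unsqueeze(-1)).squeeze(-1)
    return q_taken - baseline


# The actor loss would then be -(advantage.detach() * log_pi_taken).mean().
```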
{
"docid": "d4a0b5558045245a55efbf9b71a84bc3",
"text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.",
"title": ""
},
{
"docid": "e45e49fb299659e2e71f5c4eb825aff6",
"text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.",
"title": ""
}
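The H-DRLN abstract above relies on skill distillation, a variation of policy distillation, to compress several pre-trained skill networks into a single network. The snippet below is only a generic policy-distillation loss (a temperature-softened KL divergence between a teacher skill's action distribution and the corresponding head of the distilled student); the temperature and the use of per-skill output heads are assumptions rather than the paper's exact formulation.

```python
# Generic policy-distillation loss (sketch); the H-DRLN's skill distillation
# is described as a variation of this idea.
import torch
import torch.nn.functional as F


def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 tau: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over action distributions, softened by temperature tau."""
    teacher_probs = F.softmax(teacher_logits / tau, dim=-1)
    student_logp = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * tau * tau
```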
] |
[
{
"docid": "0a7558a172509707b33fcdfaafe0b732",
"text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.",
"title": ""
},
{
"docid": "f941c1f5e5acd9865e210b738ff1745a",
"text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.",
"title": ""
},
{
"docid": "548f43f2193cffc6711d8a15c00e8c3d",
"text": "Dither signals provide an effective way to compensate for nonlinearities in control systems. The seminal works by Zames and Shneydor, and more recently, by Mossaheb, present rigorous tools for systematic design of dithered systems. Their results rely, however, on a Lipschitz assumption relating to nonlinearity, and thus, do not cover important applications with discontinuities. This paper presents initial results on how to analyze and design dither in nonsmooth systems. In particular, it is shown that a dithered relay feedback system can be approximated by a smoothed system. Guidelines are given for tuning the amplitude and the period time of the dither signal, in order to stabilize the nonsmooth system.",
"title": ""
},
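The dither abstract above states that a dithered relay feedback system can be approximated by a smoothed system, with guidelines for tuning the dither amplitude and period. The toy simulation below illustrates that claim for a first-order plant: a fast triangular dither of amplitude A applied inside a relay behaves, on average, like the saturation nonlinearity sat(e/A). The plant, reference, and dither parameters are all illustrative assumptions, not values from the paper.

```python
# Toy comparison of a dithered relay loop with its smoothed (averaged) approximation.
import numpy as np


def simulate(T=10.0, dt=1e-4, A=0.5, dither_period=1e-2, r=0.3):
    n = int(T / dt)
    t = np.arange(n) * dt
    # Symmetric triangular dither of amplitude A with a fast period.
    d = A * (4.0 * np.abs(t / dither_period
                          - np.floor(t / dither_period + 0.5)) - 1.0)
    y_dith = y_smooth = 0.0
    out_dith, out_smooth = np.empty(n), np.empty(n)
    for k in range(n):
        e1, e2 = r - y_dith, r - y_smooth
        u1 = np.sign(e1 + d[k])              # relay with injected dither
        u2 = np.clip(e2 / A, -1.0, 1.0)      # smoothed nonlinearity sat(e/A)
        y_dith += dt * (-y_dith + u1)        # first-order plant  y' = -y + u
        y_smooth += dt * (-y_smooth + u2)
        out_dith[k], out_smooth[k] = y_dith, y_smooth
    return t, out_dith, out_smooth


t, y1, y2 = simulate()
print("max deviation between dithered and smoothed responses:",
      float(np.max(np.abs(y1 - y2))))
```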
{
"docid": "48b2d263a0f547c5c284c25a9e43828e",
"text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.",
"title": ""
},
{
"docid": "8a80b9306082f3cf373e2e638c0ecd0b",
"text": "We propose a maximal figure-of-merit (MFoM) learning framework to directly maximize mean average precision (MAP) which is a key performance metric in many multi-class classification tasks. Conventional classifiers based on support vector machines cannot be easily adopted to optimize the MAP metric. On the other hand, classifiers based on deep neural networks (DNNs) have recently been shown to deliver a great discrimination capability in automatic speech recognition and image classification as well. However, DNNs are usually optimized with the minimum cross entropy criterion. In contrast to most conventional classification methods, our proposed approach can be formulated to embed DNNs and MAP into the objective function to be optimized during training. The combination of the proposed maximum MAP (MMAP) technique and DNNs introduces nonlinearity to the linear discriminant function (LDF) in order to increase the flexibility and discriminant power of the original MFoM-trained LDF based classifiers. Tested on both automatic image annotation and audio event classification, the experimental results show consistent improvements of MAP on both datasets when compared with other state-of-the-art classifiers without using MMAP.",
"title": ""
},
{
"docid": "e7a86eeb576d4aca3b5e98dc53fcb52d",
"text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has km shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasaI translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.",
"title": ""
},
{
"docid": "1530571213fb98e163cb3cf45cfe9cc6",
"text": "We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.",
"title": ""
},
{
"docid": "065417a0c2e82cbd33798de1be98042f",
"text": "Deep neural networks usually require large labeled datasets to construct accurate models; however, in many real-world scenarios, such as medical image segmentation, labeling data are a time-consuming and costly human (expert) intelligent task. Semi-supervised methods leverage this issue by making use of a small labeled dataset and a larger set of unlabeled data. In this paper, we present a flexible framework for semi-supervised learning that combines the power of supervised methods that learn feature representations using state-of-the-art deep convolutional neural networks with the deeply embedded clustering algorithm that assigns data points to clusters based on their probability distributions and feature representations learned by the networks. Our proposed semi-supervised learning algorithm based on deeply embedded clustering (SSLDEC) learns feature representations via iterations by alternatively using labeled and unlabeled data points and computing target distributions from predictions. During this iterative procedure, the algorithm uses labeled samples to keep the model consistent and tuned with labeling, as it simultaneously learns to improve feature representation and predictions. The SSLDEC requires a few hyper-parameters and thus does not need large labeled validation sets, which addresses one of the main limitations of many semi-supervised learning algorithms. It is also flexible and can be used with many state-of-the-art deep neural network configurations for image classification and segmentation tasks. To this end, we implemented and tested our approach on benchmark image classification tasks as well as in a challenging medical image segmentation scenario. In benchmark classification tasks, the SSLDEC outperformed several state-of-the-art semi-supervised learning methods, achieving 0.46% error on MNIST with 1000 labeled points and 4.43% error on SVHN with 500 labeled points. In the iso-intense infant brain MRI tissue segmentation task, we implemented SSLDEC on a 3D densely connected fully convolutional neural network where we achieved significant improvement over supervised-only training as well as a semi-supervised method based on pseudo-labeling. Our results show that the SSLDEC can be effectively used to reduce the need for costly expert annotations, enhancing applications, such as automatic medical image segmentation.",
"title": ""
},
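SSLDEC, as described above, alternates between fitting labeled and unlabeled data and recomputing target distributions from the current soft cluster assignments. The helper below shows the standard target-distribution step from the deeply embedded clustering algorithm that SSLDEC builds on; whether SSLDEC uses exactly this form is an assumption.

```python
# Standard DEC-style target distribution (sketch): sharpen the soft assignments q
# while normalising by soft cluster frequencies to discourage cluster collapse.
import numpy as np


def target_distribution(q: np.ndarray) -> np.ndarray:
    """q: (n_samples, n_clusters) soft assignments; returns a target p of the same shape."""
    freq = q.sum(axis=0)                      # soft frequency of each cluster
    weight = (q ** 2) / freq
    return weight / weight.sum(axis=1, keepdims=True)
```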
{
"docid": "63a548ee4f8857823e4bcc7ccbc31d36",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "3b26f9c91ee0eb76768403fcb9579003",
"text": "The major task of network embedding is to learn low-dimensional vector representations of social-network nodes. It facilitates many analytical tasks such as link prediction and node clustering and thus has attracted increasing attention. The majority of existing embedding algorithms are designed for unsigned social networks. However, many social media networks have both positive and negative links, for which unsigned algorithms have little utility. Recent findings in signed network analysis suggest that negative links have distinct properties and added value over positive links. This brings about both challenges and opportunities for signed network embedding. In addition, user attributes, which encode properties and interests of users, provide complementary information to network structures and have the potential to improve signed network embedding. Therefore, in this paper, we study the novel problem of signed social network embedding with attributes. We propose a novel framework SNEA, which exploits the network structure and user attributes simultaneously for network representation learning. Experimental results on link prediction and node clustering with real-world datasets demonstrate the effectiveness of SNEA.",
"title": ""
},
{
"docid": "ea5357c6a936ae63f1660d1d3a9501e7",
"text": "DESCARTES’ REDUCTIONIST PRINCIPLE HAS HAD A PROfound influence on medicine. Similar to repairing a clock in which each broken part is fixed in order, investigators have attempted to discover causal relationships among key components of an individual and to treat those components accordingly. For example, if most of the morbidity in patients with diabetes is caused by high blood glucose levels, then control of those levels should return the system to normal and the patient’s health problems should disappear. However, in one recent study this strategy of more intensive glucose control resulted in increased risk of death. Likewise, chemotherapy often initially reduces tumor size but also produces severe adverse effects leading to other complications, including the promotion of secondary tumors. Most important, little evidence exists that more aggressive chemotherapies prolong life for many patients. In fact, chemotherapies may have overall negative effects for some patients. Most medical treatments make sense based on research of specific molecular pathways, so why do unexpected consequences occur after years of treatment? More simply, does the treatment that addresses a specific disease-related component harm the individual as a whole? To address these questions, the conflict between reductionism and complex systems must be analyzed. With increasing technological capabilities, these systems can be examined in continuously smaller components, from organs to cells, cells to chromosomes, and from chromosomes to genes. Paradoxically, the success of science also leads to blind spots in thinking as scientists become increasingly reductionist and determinist. The expectation is that as the resolution of the analysis increases, so too will the quantity and quality of information. High-resolution studies focusing on the building blocks of a biological system provide specific targets on which molecular cures can be based. While the DNA sequence of the human gene set is known, the functions of these genes are not understood in the context ofadynamicnetworkandtheresultant functional relationship tohumandiseases.Mutations inmanygenesareknowntocontribute to cancers in experimental systems, but the common mutationsthatactuallycausecancercannotyetbedetermined. Many therapies such as antibiotics, pacemakers, blood transfusions, and organ transplantation have worked well using classic approaches. In these cases, interventions were successful in treating a specific part of a complex system without triggering system chaos in many patients. However, even for these relatively safe interventions, unpredictable risk factors still exist. For every intervention that works well there are many others that do not, most of which involve complicated pathways and multiple levels of interaction. Even apparent major successes of the past have developed problems, such as the emergence and potential spread of super pathogens resistant to available antibiotic arrays. One common feature of a complex system is its emergent properties—thecollectiveresultofdistinctandinteractiveproperties generated by the interaction of individual components. 
When parts change, the behavior of a system can sometimes be predicted—but often cannot be if the system exists on the “edge of chaos.” For example, a disconnect exists between the status of the parts (such as tumor response) and the systems behavior(suchasoverall survivalof thepatient).Furthermore, nonlinear responsesof a complexsystemcanundergosudden massive and stochastic changes in response to what may seem minor perturbations. This may occur despite the same system displaying regular and predictable behavior under other conditions. For example, patients can be harmed by an uncommonadverseeffectofacommonlyusedtreatmentwhenthesystemdisplayschaoticbehaviorundersomecircumstances.This stochastic effect is what causes surprise. Given that any medical intervention is a stress to the system and that multiple system levels can respond differently, researchers must consider the stochastic response of the entire human system to drug therapyrather thanfocusingsolelyonthetargetedorganorcell oroneparticularmolecularpathwayorspecificgene.Thesame approachisnecessaryformonitoringtheclinicalsafetyofadrug. Other challenging questions await consideration. Once an entire systemisalteredbydiseaseprogression,howshould the system be restored following replacement of a defective part? If a system is altered, should it be brought back to the previous status, or is there a new standard defining a new stable system?Thedevelopmentofmanydiseasescantakeyears,during which time the system has adapted to function in the altered environment. These changes are not restricted to a few clinicallymonitored factorsbut can involve thewhole system, which now has adapted a new homeostasis with new dynamic interactions. Restoring only a few factors without considering the entire system can often result in further stress to the system, which might trigger a decline in system chaos. For many disease conditions resulting from years of adaptation, gradual",
"title": ""
},
{
"docid": "d7d808e948467a1bb241143233bf8ee2",
"text": "We discuss and predict the evolution of Simultaneous Localisation and Mapping (SLAM) into a general geometric and semantic ‘Spatial AI’ perception capability for intelligent embodied devices. A big gap remains between the visual perception performance that devices such as augmented reality eyewear or comsumer robots will require and what is possible within the constraints imposed by real products. Co-design of algorithms, processors and sensors will be needed. We explore the computational structure of current and future Spatial AI algorithms and consider this within the landscape of ongoing hardware developments.",
"title": ""
},
{
"docid": "a6959cc988542a077058e57a5d2c2eff",
"text": "A green and reliable method using supercritical fluid extraction (SFE) and molecular distillation (MD) was optimized for the separation and purification of standardized typical volatile components fraction (STVCF) from turmeric to solve the shortage of reference compounds in quality control (QC) of volatile components. A high quality essential oil with 76.0% typical components of turmeric was extracted by SFE. A sequential distillation strategy was performed by MD. The total recovery and purity of prepared STVCF were 97.3% and 90.3%, respectively. Additionally, a strategy, i.e., STVCF-based qualification and quantitative evaluation of major bioactive analytes by multiple calibrated components, was proposed to easily and effectively control the quality of turmeric. Compared with the individual calibration curve method, the STVCF-based quantification method was demonstrated to be credible and was effectively adapted for solving the shortage of reference volatile compounds and improving the QC of typical volatile components in turmeric, especially its functional products.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "71022e2197bfb99bd081928cf162f58a",
"text": "Ophthalmology and visual health research have received relatively limited attention from the personalized medicine community, but this trend is rapidly changing. Postgenomics technologies such as proteomics are being utilized to establish a baseline biological variation map of the human eye and related tissues. In this context, the choroid is the vascular layer situated between the outer sclera and the inner retina. The choroidal circulation serves the photoreceptors and retinal pigment epithelium (RPE). The RPE is a layer of cuboidal epithelial cells adjacent to the neurosensory retina and maintains the outer limit of the blood-retina barrier. Abnormal changes in choroid-RPE layers have been associated with age-related macular degeneration. We report here the proteome of the healthy human choroid-RPE complex, using reverse phase liquid chromatography and mass spectrometry-based proteomics. A total of 5309 nonredundant proteins were identified. Functional analysis of the identified proteins further pointed to molecular targets related to protein metabolism, regulation of nucleic acid metabolism, transport, cell growth, and/or maintenance and immune response. The top canonical pathways in which the choroid proteins participated were integrin signaling, mitochondrial dysfunction, regulation of eIF4 and p70S6K signaling, and clathrin-mediated endocytosis signaling. This study illustrates the largest number of proteins identified in human choroid-RPE complex to date and might serve as a valuable resource for future investigations and biomarker discovery in support of postgenomics ophthalmology and precision medicine.",
"title": ""
},
{
"docid": "86fdb9b60508f87c0210623879185c8c",
"text": "This paper proposes a novel Hierarchical Parsing Net (HPN) for semantic scene parsing. Unlike previous methods, which separately classify each object, HPN leverages global scene semantic information and the context among multiple objects to enhance scene parsing. On the one hand, HPN uses the global scene category to constrain the semantic consistency between the scene and each object. On the other hand, the context among all objects is also modeled to avoid incompatible object predictions. Specifically, HPN consists of four steps. In the first step, we extract scene and local appearance features. Based on these appearance features, the second step is to encode a contextual feature for each object, which models both the scene-object context (the context between the scene and each object) and the interobject context (the context among different objects). In the third step, we classify the global scene and then use the scene classification loss and a backpropagation algorithm to constrain the scene feature encoding. In the fourth step, a label map for scene parsing is generated from the local appearance and contextual features. Our model outperforms many state-of-the-art deep scene parsing networks on five scene parsing databases.",
"title": ""
},
{
"docid": "e685a22b6f7b20fb1289923e86e467c5",
"text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.",
"title": ""
},
{
"docid": "8a73a42bed30751cbb6798398b81571d",
"text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.",
"title": ""
}
] |
scidocsrr
|
eacb65f10b0211b0129209075e070a3f
|
A serious game model for cultural heritage
|
[
{
"docid": "49e3c33aa788d3d075c7569c6843065a",
"text": "Cultural heritage around the globe suffers from wars, natural disasters and human negligence. The importance of cultural heritage documentation is well recognized and there is an increasing pressure to document our heritage both nationally and internationally. This has alerted international organizations to the need for issuing guidelines describing the standards for documentation. Charters, resolutions and declarations by international organisations underline the importance of documentation of cultural heritage for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research. Important ones include the International Council on Monuments and Sites, ICOMOS (ICOMOS, 2005) and UNESCO, including the famous Venice Charter, The International Charter for the Conservation and Restoration of Monuments and Sites, 1964, (UNESCO, 2005).",
"title": ""
},
{
"docid": "c1e12a4feec78d480c8f0c02cdb9cb7d",
"text": "Although the Parthenon has stood on the Athenian Acropolis for nearly 2,500 years, its sculptural decorations have been scattered to museums around the world. Many of its sculptures have been damaged or lost. Fortunately, most of the decoration survives through drawings, descriptions, and casts. A component of our Parthenon Project has been to assemble digital models of the sculptures and virtually reunite them with the Parthenon. This sketch details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of almost all of the existing pediments, metopes, and frieze. Our techniques have been designed to work as quickly as possible and at low cost.",
"title": ""
}
] |
[
{
"docid": "5d8bc135f10c1a9b741cc60ad7aae04f",
"text": "In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al. (2014)). Our experiments show that the proposed architecture significantly outperforms the state-of-the art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance.",
"title": ""
},
{
"docid": "75e1e8e65bd5dcf426bf9f3ee7c666a5",
"text": "This paper offers a new, nonlinear model of informationseeking behavior, which contrasts with earlier stage models of information behavior and represents a potential cornerstone for a shift toward a new perspective for understanding user information behavior. The model is based on the findings of a study on interdisciplinary information-seeking behavior. The study followed a naturalistic inquiry approach using interviews of 45 academics. The interview results were inductively analyzed and an alternative framework for understanding information-seeking behavior was developed. This model illustrates three core processes and three levels of contextual interaction, each composed of several individual activities and attributes. These interact dynamically through time in a nonlinear manner. The behavioral patterns are analogous to an artist’s palette, in which activities remain available throughout the course of information-seeking. In viewing the processes in this way, neither start nor finish points are fixed, and each process may be repeated or lead to any other until either the query or context determine that information-seeking can end. The interactivity and shifts described by the model show information-seeking to be nonlinear, dynamic, holistic, and flowing. The paper offers four main implications of the model as it applies to existing theory and models, requirements for future research, and the development of information literacy curricula. Central to these implications is the creation of a new nonlinear perspective from which user information-seeking can be interpreted.",
"title": ""
},
{
"docid": "d3b24655e01cbb4f5d64006222825361",
"text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "808115043786372af3e3fb726cc3e191",
"text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.",
"title": ""
},
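The Scapy passage above describes a Modbus extension but does not include the layer definition itself. The fragment below sketches what a minimal Modbus/TCP read-holding-registers layer could look like in Scapy; the field layout merges the MBAP header with a simple request PDU, and the target address is a documentation placeholder. It is an illustration of the approach, not the authors' actual extension.

```python
# Hypothetical minimal Modbus/TCP layer in Scapy (illustrative, not the paper's code).
import socket

from scapy.all import Packet, ShortField, ByteField, StreamSocket, bind_layers
from scapy.layers.inet import TCP


class ModbusReadHolding(Packet):
    """MBAP header plus a 'Read Holding Registers' (function code 3) request PDU."""
    name = "ModbusReadHolding"
    fields_desc = [
        ShortField("transaction_id", 1),
        ShortField("protocol_id", 0),
        ShortField("length", 6),          # number of bytes following this field
        ByteField("unit_id", 1),
        ByteField("function_code", 3),
        ShortField("start_addr", 0),
        ShortField("quantity", 8),
    ]


bind_layers(TCP, ModbusReadHolding, dport=502)
bind_layers(TCP, ModbusReadHolding, sport=502)

if __name__ == "__main__":
    # 192.0.2.10 is a placeholder address; Modbus/TCP uses port 502.
    s = socket.socket()
    s.connect(("192.0.2.10", 502))
    ss = StreamSocket(s, basecls=ModbusReadHolding)
    reply = ss.sr1(ModbusReadHolding(), timeout=2, verbose=False)
    if reply is not None:
        reply.show()
```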
{
"docid": "a3345ad4a18be52b478d3e75cf05a371",
"text": "In the course of the routine use of NMR as an aid for organic chemistry, a day-to-day problem is the identification of signals deriving from common contaminants (water, solvents, stabilizers, oils) in less-than-analytically-pure samples. This data may be available in the literature, but the time involved in searching for it may be considerable. Another issue is the concentration dependence of chemical shifts (especially 1H); results obtained two or three decades ago usually refer to much more concentrated samples, and run at lower magnetic fields, than today’s practice. We therefore decided to collect 1H and 13C chemical shifts of what are, in our experience, the most popular “extra peaks” in a variety of commonly used NMR solvents, in the hope that this will be of assistance to the practicing chemist.",
"title": ""
},
{
"docid": "12f6f7e9350d436cc167e00d72b6e1b1",
"text": "This paper reviews the state of the art of a polyphase complex filter for RF front-end low-IF transceivers applications. We then propose a multi-stage polyphase filter design to generate a quadrature I/Q signal to achieve a wideband precision quadrature phase shift with a constant 90 ° phase difference for self-interference cancellation circuit for full duplex radio. The number of the stages determines the bandwidth requirement of the channel. An increase of 87% in bandwidth is attained when our design is implemented in multi-stage from 2 to an extended 6 stages. A 4-stage polyphase filter achieves 2.3 GHz bandwidth.",
"title": ""
},
{
"docid": "671eb73ad86525cb183e2b8dbfe09947",
"text": "We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.",
"title": ""
},
{
"docid": "6021968dc39e13620e90c30d9c008d19",
"text": "In recent years, Deep Reinforcement Learning has made impressive advances in solving several important benchmark problems for sequential decision making. Many control applications use a generic multilayer perceptron (MLP) for non-vision parts of the policy network. In this work, we propose a new neural network architecture for the policy network representation that is simple yet effective. The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module. Intuitively, the nonlinear control is for forward-looking and global control, while the linear control stabilizes the local dynamics around the residual of global control. We hypothesize that this will bring together the benefits of both linear and nonlinear policies: improve training sample efficiency, final episodic reward, and generalization of learned policy, while requiring a smaller network and being generally applicable to different training methods. We validated our hypothesis with competitive results on simulations from OpenAI MuJoCo, Roboschool, Atari, and a custom 2D urban driving environment, with various ablation and generalization tests, trained with multiple black-box and policy gradient training methods. The proposed architecture has the potential to improve upon broader control tasks by incorporating problem specific priors into the architecture. As a case study, we demonstrate much improved performance for locomotion tasks by emulating the biological central pattern generators (CPGs) as the nonlinear part of the architecture.",
"title": ""
},
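The Structured Control Net abstract above describes splitting the policy network into a linear control module and a nonlinear control module whose outputs are combined. A bare-bones PyTorch rendering of that split is sketched below; the additive combination follows the description, while the hidden sizes and activations are assumptions.

```python
# Bare-bones Structured Control Net sketch: action = linear(state) + nonlinear(state).
import torch
import torch.nn as nn


class StructuredControlNet(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.linear = nn.Linear(obs_dim, act_dim)       # local, stabilising term
        self.nonlinear = nn.Sequential(                 # global, forward-looking term
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.linear(s) + self.nonlinear(s)


# Example usage (dimensions are placeholders):
# policy = StructuredControlNet(obs_dim=17, act_dim=6)
# action = policy(torch.randn(1, 17))
```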
{
"docid": "cd3bbec4c7f83c9fb553056b1b593bec",
"text": "We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. Recurrent Neural Networks and Music Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer’s outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small initial random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch’s output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. (Mozer 1994)’s CONCERT uses a backpropagationthrough-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves. 
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can “cache” errors and release them for weight updates much later in time. The gates can learn to delay a block’s outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber’s model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord’s duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more that .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber’s work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber’s chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value ",
"title": ""
},
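The passage above spells out the 12-bit chord encoding used by Eck and Schmidhuber (for example, C7 = 100010010010, with chords inverted into a single octave). A small helper reproducing that encoding is sketched below; the note-name spelling (A# standing in for B-flat, D# for E-flat) is an implementation convenience, not part of the original description.

```python
# 12-bit binary chord encoding as described above: one bit per pitch class,
# octave information discarded (chords "inverted" into one octave).
PITCH_CLASS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
               "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}


def chord_vector(tones):
    """Return a 12-element 0/1 list with a 1 for each pitch class in the chord."""
    vec = [0] * 12
    for tone in tones:
        vec[PITCH_CLASS[tone]] = 1
    return vec


# C7 = C, E, G, B-flat  and  F7 = F, A, C, E-flat (inverted), as in the text.
assert "".join(map(str, chord_vector(["C", "E", "G", "A#"]))) == "100010010010"
assert "".join(map(str, chord_vector(["F", "A", "C", "D#"]))) == "100101000100"
```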
{
"docid": "092bf4ee1626553206ee9b434cda957b",
"text": ".......................................................................................................... 3 Introduction ...................................................................................................... 4 Methods........................................................................................................... 7 Procedure ..................................................................................................... 7 Inclusion and exclusion criteria ..................................................................... 8 Data extraction and quality assessment ....................................................... 8 Results ............................................................................................................ 9 Included studies ........................................................................................... 9 Quality of included articles .......................................................................... 13 Excluded studies ........................................................................................ 15 Fig. 1 CONSORT 2010 Flow Diagram ....................................................... 16 Table 1: Primary studies ............................................................................. 17 Table2: Secondary studies ......................................................................... 18 Discussion ..................................................................................................... 19 Conclusion ..................................................................................................... 22 Acknowledgements ....................................................................................... 22 References .................................................................................................... 23 Appendix ....................................................................................................... 32",
"title": ""
},
{
"docid": "5c7678fae587ef784b4327d545a73a3e",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "c26919afa32708786ae7f96b88883ed9",
"text": "A Privacy Enhancement Technology (PET) is an application or a mechanism which allows users to protect the privacy of their personally identifiable information. Early PETs were about enabling anonymous mailing and anonymous browsing, but lately there have been active research and development efforts in many other problem domains. This paper describes the first pattern language for developing privacy enhancement technologies. Currently, it contains 12 patterns. These privacy patterns are not limited to a specific problem domain; they can be applied to design anonymity systems for various types of online communication, online data sharing, location monitoring, voting and electronic cash management. The pattern language guides a developer when he or she is designing a PET for an existing problem, or innovating a solution for a new problem.",
"title": ""
},
{
"docid": "c6058966ef994d7b447f47d41d7fff33",
"text": "The advancement in computer technology has encouraged the researchers to develop software for assisting doctors in making decision without consulting the specialists directly. The software development exploits the potential of human intelligence such as reasoning, making decision, learning (by experiencing) and many others. Artificial intelligence is not a new concept, yet it has been accepted as a new technology in computer science. It has been applied in many areas such as education, business, medical and manufacturing. This paper explores the potential of artificial intelligence techniques particularly for web-based medical applications. In addition, a model for web-based medical diagnosis and prediction is",
"title": ""
},
{
"docid": "753b167933f5dd92c4b8021f6b448350",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "b0a0ad5f90d849696e3431373db6b4a5",
"text": "A comparative study of the structure of the flower in three species of Robinia L., R. pseudoacacia, R. × ambigua, and R. neomexicana, was carried out. The widely naturalized R. pseudoacacia, as compared to the two other species, has the smallest sizes of flower organs at all stages of development. Qualitative traits that describe each phase of the flower development were identified. A set of microscopic morphological traits of the flower (both quantitative and qualitative) was analyzed. Additional taxonomic traits were identified: shape of anthers, size and shape of pollen grains, and the extent of pollen fertility.",
"title": ""
},
{
"docid": "da72f2990b3e21c45a92f7b54be1d202",
"text": "A low-profile, high-gain, and wideband metasurface (MS)-based filtering antenna with high selectivity is investigated in this communication. The planar MS consists of nonuniform metallic patch cells, and it is fed by two separated microstrip-coupled slots from the bottom. The separation between the two slots together with a shorting via is used to provide good filtering performance in the lower stopband, whereas the MS is elaborately designed to provide a sharp roll-off rate at upper band edge for the filtering function. The MS also simultaneously works as a high-efficient radiator, enhancing the impedance bandwidth and antenna gain of the feeding slots. To verify the design, a prototype operating at 5 GHz has been fabricated and measured. The reflection coefficient, radiation pattern, antenna gain, and efficiency are studied, and reasonable agreement between the measured and simulated results is observed. The prototype with dimensions of 1.3 λ0 × 1.3 λ0 × 0.06 λ0 has a 10-dB impedance bandwidth of 28.4%, an average gain of 8.2 dBi within passband, and an out-of-band suppression level of more than 20 dB within a very wide stop-band.",
"title": ""
},
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "5912dda99171351acc25971d3c901624",
"text": "New cultivars with very erect leaves, which increase light capture for photosynthesis and nitrogen storage for grain filling, may have increased grain yields. Here we show that the erect leaf phenotype of a rice brassinosteroid–deficient mutant, osdwarf4-1, is associated with enhanced grain yields under conditions of dense planting, even without extra fertilizer. Molecular and biochemical studies reveal that two different cytochrome P450s, CYP90B2/OsDWARF4 and CYP724B1/D11, function redundantly in C-22 hydroxylation, the rate-limiting step of brassinosteroid biosynthesis. Therefore, despite the central role of brassinosteroids in plant growth and development, mutation of OsDWARF4 alone causes only limited defects in brassinosteroid biosynthesis and plant morphology. These results suggest that regulated genetic modulation of brassinosteroid biosynthesis can improve crops without the negative environmental effects of fertilizers.",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
}
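As a loose illustration of the gating idea in the entry above, the sketch below switches between two sub-models depending on which trading phase is active; the Bayesian networks are replaced here by trivial moving-average rules, and all thresholds and names are invented for the example, not taken from the paper.

```python
import random

# Toy illustration of the gating idea: two sub-models (trivial moving-average
# rules standing in for Bayesian networks) are activated and deactivated by
# gate predicates, producing buy/sell/hold decisions. Not the authors' method.

def sma(prices, n):
    """Simple moving average of the last n prices (None until enough data)."""
    return sum(prices[-n:]) / n if len(prices) >= n else None

class GatedTrader:
    def __init__(self):
        self.phase = "looking_to_buy"      # which sub-model is currently active

    def step(self, prices):
        fast, slow = sma(prices, 5), sma(prices, 20)
        if fast is None or slow is None:
            return "hold"
        if self.phase == "looking_to_buy":
            if fast > slow:                # gate fires: switch to the selling phase
                self.phase = "looking_to_sell"
                return "buy"
        elif fast < slow:                  # gate fires: hand control back
            self.phase = "looking_to_buy"
            return "sell"
        return "hold"

random.seed(0)
prices, trader, price = [], GatedTrader(), 100.0
for t in range(200):                       # synthetic price path
    price *= 1 + random.gauss(0, 0.01)
    prices.append(price)
    signal = trader.step(prices)
    if signal != "hold":
        print(t, signal, round(price, 2))
```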
] |
scidocsrr
|
a4b6ff84625bc57c265b825f712ba42b
|
Real-Time Face Detection and Motion Analysis With Application in “Liveness” Assessment
|
[
{
"docid": "6f56d10f90b1b3ba0c1700fa06c9199e",
"text": "Finding human faces automatically in an image is a dif cult yet important rst step to a fully automatic face recognition system This paper presents an example based learning approach for locating unoccluded frontal views of human faces in complex scenes The technique represents the space of human faces by means of a few view based face and non face pattern prototypes At each image location a value distance measure is com puted between the local image pattern and each prototype A trained classi er determines based on the set of dis tance measurements whether a human face exists at the current image location We show empirically that our distance metric is critical for the success of our system",
"title": ""
}
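A minimal sketch of the prototype-distance idea described above: each candidate window is compared against a handful of face and non-face prototypes and the resulting distances drive the decision. The real system uses a richer two-part distance metric and a trained classifier; the nearest-prototype rule, patch size and prototype values below are placeholders.

```python
import numpy as np

# Each candidate window is compared to a few "face" and "non-face" prototype
# patches; the vector of distances is the feature the paper feeds to a trained
# classifier. Here a nearest-prototype rule and random placeholder prototypes
# stand in for both the learned prototypes and the classifier.

PATCH = 19 * 19                                   # flattened window size (illustrative)
rng = np.random.default_rng(0)
face_protos = rng.normal(0.6, 0.1, size=(6, PATCH))
nonface_protos = rng.normal(0.3, 0.1, size=(6, PATCH))

def distance_features(window):
    """Distances from one window to every prototype (the paper's feature vector)."""
    d_face = np.linalg.norm(face_protos - window, axis=1)
    d_nonface = np.linalg.norm(nonface_protos - window, axis=1)
    return d_face, d_nonface

def is_face(window):
    d_face, d_nonface = distance_features(window)
    return d_face.min() < d_nonface.min()         # nearest-prototype decision

window = rng.normal(0.55, 0.1, size=PATCH)        # one candidate image window
print(is_face(window))
```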
] |
[
{
"docid": "82c8a692e3b39e58bd73997b2e922c2c",
"text": "The traditional approaches to building survivable systems assume a framework of absolute trust requiring a provably impenetrable and incorruptible Trusted Computing Base (TCB). Unfortunately, we don’t have TCB’s, and experience suggests that we never will. We must instead concentrate on software systems that can provide useful services even when computational resource are compromised. Such a system will 1) Estimate the degree to which a computational resources may be trusted using models of possible compromises. 2) Recognize that a resource is compromised by relying on a system for long term monitoring and analysis of the computational infrastructure. 3) Engage in self-monitoring, diagnosis and adaptation to best achieve its purposes within the available infrastructure. All this, in turn, depends on the ability of the application, monitoring, and control systems to engage in rational decision making about what resources they should use in order to achieve the best ratio of expected benefit to risk.",
"title": ""
},
{
"docid": "9e1c3d4a8bbe211b85b19b38e39db28e",
"text": "This paper presents a novel context-based scene recognition method that enables mobile robots to recognize previously observed topological places in known environments or categorize previously unseen places in new environments. We achieve this by introducing the Histogram of Oriented Uniform Patterns (HOUP), which provides strong discriminative power for place recognition, while offering a significant level of generalization for place categorization. HOUP descriptors are used for image representation within a subdivision framework, where the size and location of sub-regions are determined using an informative feature selection method based on kernel alignment. Further improvement is achieved by developing a similarity measure that accounts for perceptual aliasing to eliminate the effect of indistinctive but visually similar regions that are frequently present in outdoor and indoor scenes. An extensive set of experiments reveals the excellent performance of our method on challenging categorization and recognition tasks. Specifically, our proposed method outperforms the current state of the art on two place categorization datasets with 15 and 5 place categories, and two topological place recognition datasets, with 5 and 27 places.",
"title": ""
},
{
"docid": "58d629b3ac6bd731cd45126ce3ed8494",
"text": "The Support Vector Machine (SVM) is a common machine learning tool that is widely used because of its high classification accuracy. Implementing SVM for embedded real-time applications is very challenging because of the intensive computations required. This increases the attractiveness of implementing SVM on hardware platforms for reaching high performance computing with low cost and power consumption. This paper provides the first comprehensive survey of current literature (2010-2015) of different hardware implementations of SVM classifier on Field-Programmable Gate Array (FPGA). A classification of existing techniques is presented, along with a critical analysis and discussion. A challenging trade-off between meeting embedded real-time systems constraints and high classification accuracy has been observed. Finally, some key future research directions are suggested.",
"title": ""
},
{
"docid": "c8c57c89f5bd92c726373f9cf77726e0",
"text": "Research of named entity recognition (NER) on electrical medical records (EMRs) focuses on verifying whether methods to NER in traditional texts are effective for that in EMRs, and there is no model proposed for enhancing performance of NER via deep learning from the perspective of multiclass classification. In this paper, we annotate a real EMR corpus to accomplish the model training and evaluation. And, then, we present a Convolutional Neural Network (CNN) based multiclass classification method for mining named entities from EMRs. The method consists of two phases. In the phase 1, EMRs are pre-processed for representing samples with word embedding. In the phase 2, the method is built by segmenting training data into many subsets and training a CNN binary classification model on each of subset. Experimental results showed the effectiveness of our method.",
"title": ""
},
{
"docid": "2d6225b20cf13d2974ce78877642a2f7",
"text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.",
"title": ""
},
{
"docid": "40e38080e12b2d73836fcb1cf79db033",
"text": "The research in statistical parametric speech synthesis is towards improving naturalness and intelligibility. In this work, the deviation in spectral tilt of the natural and synthesized speech is analyzed and observed a large gap between the two. Furthermore, the same is analyzed for different classes of sounds, namely low-vowels, mid-vowels, high-vowels, semi-vowels, nasals, and found to be varying with category of sound units. Based on variation, a novel method for spectral tilt enhancement is proposed, where the amount of enhancement introduced is different for different classes of sound units. The proposed method yields improvement in terms of intelligibility, naturalness, and speaker similarity of the synthesized speech.",
"title": ""
},
{
"docid": "0ea239ac71e65397d0713fe8c340f67c",
"text": "Mutations in leucine-rich repeat kinase 2 (LRRK2) are a common cause of familial and sporadic Parkinson's disease (PD). Elevated LRRK2 kinase activity and neurodegeneration are linked, but the phosphosubstrate that connects LRRK2 kinase activity to neurodegeneration is not known. Here, we show that ribosomal protein s15 is a key pathogenic LRRK2 substrate in Drosophila and human neuron PD models. Phosphodeficient s15 carrying a threonine 136 to alanine substitution rescues dopamine neuron degeneration and age-related locomotor deficits in G2019S LRRK2 transgenic Drosophila and substantially reduces G2019S LRRK2-mediated neurite loss and cell death in human dopamine and cortical neurons. Remarkably, pathogenic LRRK2 stimulates both cap-dependent and cap-independent mRNA translation and induces a bulk increase in protein synthesis in Drosophila, which can be prevented by phosphodeficient T136A s15. These results reveal a novel mechanism of PD pathogenesis linked to elevated LRRK2 kinase activity and aberrant protein synthesis in vivo.",
"title": ""
},
{
"docid": "5caa0646c0d5b1a2a0c799e048b5557a",
"text": "The goal of this research is to find the efficient and most widely used cryptographic algorithms form the history, investigating one of its merits and demerits which have not been modified so far. Perception of cryptography, its techniques such as transposition & substitution and Steganography were discussed. Our main focus is on the Playfair Cipher, its advantages and disadvantages. Finally, we have proposed a few methods to enhance the playfair cipher for more secure and efficient cryptography.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
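A rough sketch of the global-peak search described above, under the assumptions stated in the comments: the P-V curve is a synthetic two-peak stand-in and the boost converter and duty-cycle handling are omitted, so this only illustrates how a small Differential Evolution loop can escape the local maximum.

```python
import numpy as np

# Differential Evolution searching for the global maximum-power point of a
# partially shaded P-V curve. The curve below is synthetic, not a PV model;
# population size, F, CR and generation count are illustrative choices.

def pv_power(v):
    """Synthetic P-V curve with a local peak near 12 V and a global one near 30 V."""
    return 40 * np.exp(-((v - 12) / 4) ** 2) + 65 * np.exp(-((v - 30) / 3) ** 2)

def de_mppt(bounds=(0.0, 40.0), pop_size=10, gens=40, F=0.7, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)          # candidate operating voltages
    fit = pv_power(pop)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            trial = mutant if rng.random() < CR else pop[i]    # simplified crossover
            if pv_power(trial) > fit[i]:         # greedy selection (maximising power)
                pop[i], fit[i] = trial, pv_power(trial)
    best = np.argmax(fit)
    return pop[best], fit[best]

v_mpp, p_mpp = de_mppt()
print(f"global MPP ~ {v_mpp:.1f} V, {p_mpp:.1f} W")
```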
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "b42f4d645e2a7e24df676a933f414a6c",
"text": "Epilepsy is a common neurological condition which affects the central nervous system that causes people to have a seizure and can be assessed by electroencephalogram (EEG). Electroencephalography (EEG) signals reflect two types of paroxysmal activity: ictal activity and interictal paroxystic events (IPE). The relationship between IPE and ictal activity is an essential and recurrent question in epileptology. The spike detection in EEG is a difficult problem. Many methods have been developed to detect the IPE in the literature. In this paper we propose three methods to detect the spike in real EEG signal: Page Hinkley test, smoothed nonlinear energy operator (SNEO) and fractal dimension. Before using these methods, we filter the signal. The Singular Spectrum Analysis (SSA) filter is used to remove the noise in an EEG signal.",
"title": ""
},
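Of the three detectors named above, the Page-Hinkley test is simple enough to sketch directly; the delta and lambda values below are illustrative and would need tuning on real, SSA-filtered EEG. The SNEO and fractal-dimension methods are not shown.

```python
# Minimal Page-Hinkley change detector: flags an upward shift in the mean of a
# (pre-filtered) signal. delta is the allowed drift, lambda_ the alarm threshold.

def page_hinkley(signal, delta=0.05, lambda_=5.0):
    alarms, mean, cum, cum_min = [], 0.0, 0.0, 0.0
    for t, x in enumerate(signal, start=1):
        mean += (x - mean) / t                 # running mean
        cum += x - mean - delta                # cumulative deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > lambda_:            # change (possible spike) detected
            alarms.append(t - 1)
            cum, cum_min = 0.0, 0.0            # restart after an alarm
    return alarms

baseline = [0.1 * ((-1) ** i) for i in range(200)]      # quiet background
spike = [8.0, 9.0, 7.5, 6.0]                            # interictal-like burst
print(page_hinkley(baseline + spike + baseline))
```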
{
"docid": "fa7cbe54e7fdc2ef373cf4b966181eba",
"text": "Fingerprint enhancement is a critical step in fingerprint recognition systems. There are many existing contact-based fingerprint image enhancement methods and they have their own strengths and weaknesses. However, image enhancement approaches that can be used for contactless fingerprints are rarely considered and the number of such approaches is limited. Furthermore, the performance of existing contact-based fingerprint enhancement methods on the contactless fingerprint samples are unsatisfactory. Therefore, in this paper we propose an improved 3-step fingerprint image quality enhancement approach, which can be used for enhancing contactless fingerprint samples. The evaluation results show that, the proposed enhancement method significantly increases the number of detected minutiae, and improves the performance of fingerprint recognition system by reducing 7% and 15% EER compared to existing methods, respectively.",
"title": ""
},
{
"docid": "310036a45a95679a612cc9a60e44e2e0",
"text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.",
"title": ""
},
{
"docid": "7b7f1f029e13008b1578c87c7319b645",
"text": "This paper presents the design and manufacturing processes of a new piezoactuated XY stage with integrated parallel, decoupled, and stacked kinematics structure for micro-/nanopositioning application. The flexure-based XY stage is composed of two decoupled prismatic-prismatic limbs which are constructed by compound parallelogram flexures and compound bridge-type displacement amplifiers. The two limbs are assembled in a parallel and stacked manner to achieve a compact stage with the merits of parallel kinematics. Analytical models for the mechanical performance assessment of the stage in terms of kinematics, statics, stiffness, load capacity, and dynamics are derived and verified with finite element analysis. A prototype of the XY stage is then fabricated, and its decoupling property is tested. Moreover, the Bouc-Wen hysteresis model of the system is identified by resorting to particle swarm optimization, and a control scheme combining the inverse hysteresis model-based feedforward with feedback control is employed to compensate for the plant nonlinearity and uncertainty. Experimental results reveal that a submicrometer accuracy single-axis motion tracking and biaxial contouring can be achieved by the micropositioning system, which validate the effectiveness of the proposed mechanism and controller designs as well.",
"title": ""
},
{
"docid": "56bd18820903da1917ca5d194b520413",
"text": "The problem of identifying subtle time-space clustering of dis ease, as may be occurring in leukemia, is described and reviewed. Published approaches, generally associated with studies of leuke mia, not dependent on knowledge of the underlying population for their validity, are directed towards identifying clustering by establishing a relationship between the temporal and the spatial separations for the n(n —l)/2 possible pairs which can be formed from the n observed cases of disease. Here it is proposed that statistical power can be improved by applying a reciprocal trans form to these separations. While a permutational approach can give valid probability levels for any observed association, for reasons of practicability, it is suggested that the observed associa tion be tested relative to its permutational variance. Formulas and computational procedures for doing so are given. While the distance measures between points represent sym metric relationships subject to mathematical and geometric regu larities, the variance formula developed is appropriate for ar bitrary relationships. Simplified procedures are given for the ease of symmetric and skew-symmetric relationships. The general pro cedure is indicated as being potentially useful in other situations as, for example, the study of interpersonal relationships. Viewing the procedure as a regression approach, the possibility for extend ing it to nonlinear and mult ¡variatesituations is suggested. Other aspects of the problem and of the procedure developed are discussed.",
"title": ""
},
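A small sketch of the pairwise space-time statistic and its permutation assessment described above; the reciprocal-transform constants and the toy case data are invented, and the variance-based approximation recommended in the abstract is replaced here by an explicit permutation loop.

```python
import itertools, random, math

# Mantel-type statistic: for every pair of cases, multiply a reciprocal
# transform of the spatial separation by one of the temporal separation and
# sum over pairs; significance is judged by permuting the case times.

def mantel_stat(coords, times, eps_d=0.1, eps_t=0.1):
    total = 0.0
    for (xy_i, t_i), (xy_j, t_j) in itertools.combinations(zip(coords, times), 2):
        d = math.dist(xy_i, xy_j)
        t = abs(t_i - t_j)
        total += 1.0 / (d + eps_d) * 1.0 / (t + eps_t)   # close in space AND time -> large
    return total

def permutation_p_value(coords, times, n_perm=999, seed=0):
    rng = random.Random(seed)
    observed = mantel_stat(coords, times)
    shuffled = list(times)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if mantel_stat(coords, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

coords = [(0, 0), (0.1, 0.2), (5, 5), (5.2, 4.9), (10, 0)]   # case locations
times = [1, 2, 30, 31, 60]                                   # onset days
print(permutation_p_value(coords, times, n_perm=199))
```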
{
"docid": "86d705256c19f63dac90162b33818a9b",
"text": "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.",
"title": ""
},
{
"docid": "ce429bbed5895731c9a3a9b77e3f488b",
"text": "[Purpose] This study assessed the relationships between the ankle dorsiflexion range of motion and foot and ankle strength. [Subjects and Methods] Twenty-nine healthy (young adults) volunteers participated in this study. Each participant completed tests for ankle dorsiflexion range of motion, hallux flexor strength, and ankle plantar and dorsiflexor strength. [Results] The results showed (1) a moderate correlation between ankle dorsiflexor strength and dorsiflexion range of motion and (2) a moderate correlation between ankle dorsiflexor strength and first toe flexor muscle strength. Ankle dorsiflexor strength is the main contributor ankle dorsiflexion range of motion to and first toe flexor muscle strength. [Conclusion] Ankle dorsiflexion range of motion can play an important role in determining ankle dorsiflexor strength in young adults.",
"title": ""
},
{
"docid": "01a95065526771523795494c9968efb9",
"text": "Depression is one of the most common and debilitating psychiatric disorders and is a leading cause of suicide. Most people who become depressed will have multiple episodes, and some depressions are chronic. Persons with bipolar disorder will also have manic or hypomanic episodes. Given the recurrent nature of the disorder, it is important not just to treat the acute episode, but also to protect against its return and the onset of subsequent episodes. Several types of interventions have been shown to be efficacious in treating depression. The antidepressant medications are relatively safe and work for many patients, but there is no evidence that they reduce risk of recurrence once their use is terminated. The different medication classes are roughly comparable in efficacy, although some are easier to tolerate than are others. About half of all patients will respond to a given medication, and many of those who do not will respond to some other agent or to a combination of medications. Electro-convulsive therapy is particularly effective for the most severe and resistant depressions, but raises concerns about possible deleterious effects on memory and cognition. It is rarely used until a number of different medications have been tried. Although it is still unclear whether traditional psychodynamic approaches are effective in treating depression, interpersonal psychotherapy (IPT) has fared well in controlled comparisons with medications and other types of psychotherapies. It also appears to have a delayed effect that improves the quality of social relationships and interpersonal skills. It has been shown to reduce acute distress and to prevent relapse and recurrence so long as it is continued or maintained. Treatment combining IPT with medication retains the quick results of pharmacotherapy and the greater interpersonal breadth of IPT, as well as boosting response in patients who are otherwise more difficult to treat. The main problem is that IPT has only recently entered clinical practice and is not widely available to those in need. Cognitive behavior therapy (CBT) also appears to be efficacious in treating depression, and recent studies suggest that it can work for even severe depressions in the hands of experienced therapists. Not only can CBT relieve acute distress, but it also appears to reduce risk for the return of symptoms as long as it is continued or maintained. Moreover, it appears to have an enduring effect that reduces risk for relapse or recurrence long after treatment is over. Combined treatment with medication and CBT appears to be as efficacious as treatment with medication alone and to retain the enduring effects of CBT. There also are indications that the same strategies used to reduce risk in psychiatric patients following successful treatment can be used to prevent the initial onset of depression in persons at risk. More purely behavioral interventions have been studied less than the cognitive therapies, but have performed well in recent trials and exhibit many of the benefits of cognitive therapy. Mood stabilizers like lithium or the anticonvulsants form the core treatment for bipolar disorder, but there is a growing recognition that the outcomes produced by modern pharmacology are not sufficient. Both IPT and CBT show promise as adjuncts to medication with such patients. The same is true for family-focused therapy, which is designed to reduce interpersonal conflict in the family. Clearly, more needs to be done with respect to treatment of the bipolar disorders. 
Good medical management of depression can be hard to find, and the empirically supported psychotherapies are still not widely practiced. As a consequence, many patients do not have access to adequate treatment. Moreover, not everyone responds to the existing interventions, and not enough is known about what to do for people who are not helped by treatment. Although great strides have been made over the past few decades, much remains to be done with respect to the treatment of depression and the bipolar disorders.",
"title": ""
},
{
"docid": "e9ea3dd59bb3ab6bd698b44c993a8b0e",
"text": "We present an optical flow algorithm for large displacement motions. Most existing optical flow methods use the standard coarse-to-fine framework to deal with large displacement motions which has intrinsic limitations. Instead, we formulate the motion estimation problem as a motion segmentation problem. We use approximate nearest neighbor fields to compute an initial motion field and use a robust algorithm to compute a set of similarity transformations as the motion candidates for segmentation. To account for deviations from similarity transformations, we add local deformations in the segmentation process. We also observe that small objects can be better recovered using translations as the motion candidates. We fuse the motion results obtained under similarity transformations and under translations together before a final refinement. Experimental validation shows that our method can successfully handle large displacement motions. Although we particularly focus on large displacement motions in this work, we make no sacrifice in terms of overall performance. In particular, our method ranks at the top of the Middlebury benchmark.",
"title": ""
}
] |
scidocsrr
|
441a8cccfe1b05140b8bed527e8a2359
|
Building a Recommender Agent for e-Learning Systems
|
[
{
"docid": "323113ab2bed4b8012f3a6df5aae63be",
"text": "Clustering data generally involves some input parameters or heuristics that are usually unknown at the time they are needed. We discuss the general problem of parameters in clustering and present a new approach, TURN, based on boundary detection and apply it to the clustering of web log data. We also present the use of di erent lters on the web log data to focus the clustering results and discuss di erent coeÆcients for de ning similarity in a non-Euclidean space.",
"title": ""
}
] |
[
{
"docid": "d297360f609e4b03c9d70fda7cc04123",
"text": "This paper describes an FPGA implementation of a single-precision floating-point multiply-accumulator (FPMAC) that supports single-cycle accumulation while maintaining high clock frequencies. A non-traditional internal representation reduces the cost of mantissa alignment within the accumulator. The FPMAC is evaluated on an Altera Stratix III FPGA.",
"title": ""
},
{
"docid": "35981768a2a46c2dd9d52ebbd5b63750",
"text": "A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.",
"title": ""
},
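A toy version of the variance-driven threshold state machine sketched above, used only to show how entry and exit times segment a vehicle signature; the window length, thresholds and synthetic signal are made up, and the five-feature classification tree is not shown.

```python
from collections import deque

# Sliding-window variance of the magnetometer magnitude switches between
# "idle" and "vehicle present", yielding (entry, exit) sample indices that
# segment each vehicle signature.

def detect_vehicles(samples, window=10, on_var=0.5, off_var=0.1):
    buf, state, start, events = deque(maxlen=window), "idle", None, []
    for i, x in enumerate(samples):
        buf.append(x)
        if len(buf) < window:
            continue
        mean = sum(buf) / window
        var = sum((v - mean) ** 2 for v in buf) / window
        if state == "idle" and var > on_var:
            state, start = "vehicle", i          # vehicle enters the zone
        elif state == "vehicle" and var < off_var:
            events.append((start, i))            # vehicle leaves: one segment
            state = "idle"
    return events

quiet = [0.0] * 50
passing = [3.0 * ((-1) ** k) * (k % 7) / 7 for k in range(40)]   # fluctuating field
print(detect_vehicles(quiet + passing + quiet))
```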
{
"docid": "176cf87aa657a5066a02bfb650532070",
"text": "Structural Design of Reinforced Concrete Tall Buildings Author: Ali Sherif S. Rizk, Director, Dar al-Handasah Shair & Partners Subject: Structural Engineering",
"title": ""
},
{
"docid": "02c687cbe7961f082c60fad1cc3f3f80",
"text": "The simplicity of Transpose Jacobian (TJ) control is a significant characteristic of this algorithm for controlling robotic manipulators. Nevertheless, a poor performance may result in tracking of fast trajectories, since it is not dynamics-based. Use of high gains can deteriorate performance seriously in the presence of feedback measurement noise. Another drawback is that there is no prescribed method of selecting its control gains. In this paper, based on feedback linearization approach a Modified TJ (MTJ) algorithm is presented which employs stored data of the control command in the previous time step, as a learning tool to yield improved performance. The gains of this new algorithm can be selected systematically, and do not need to be large, hence the noise rejection characteristics of the algorithm are improved. Based on Lyapunov’s theorems, it is shown that both the standard and the MTJ algorithms are asymptotically stable. Analysis of the required computational effort reveals the efficiency of the proposed MTJ law compared to the Model-based algorithms. Simulation results are presented which compare tracking performance of the MTJ algorithm to that of the TJ and Model-Based algorithms in various tasks. Results of these simulations show that performance of the new MTJ algorithm is comparable to that of Computed Torque algorithms, without requiring a priori knowledge of plant dynamics, and with reduced computational burden. Therefore, the proposed algorithm is well suited to most industrial applications where simple efficient algorithms are more appropriate than complicated theoretical ones with massive computational burden. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
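For reference, the baseline transpose-Jacobian law that the abstract above modifies can be written as below; the exact form of the MTJ correction term (which reuses the previous control command) is not reproduced here.

```latex
% Baseline transpose-Jacobian law: a PD action on the task-space error e,
% mapped to joint torques through the Jacobian transpose. K_p and K_d are
% gain matrices, x_d the desired end-effector pose, q the joint angles.
\[
  \tau \;=\; J^{T}(q)\,\bigl(K_p\,e + K_d\,\dot{e}\bigr),
  \qquad e = x_d - x .
\]
```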
{
"docid": "b22137cbb14396f1dcd24b2a15b02508",
"text": "This paper studies the self-alignment properties between two chips that are stacked on top of each other with copper pillars micro-bumps. The chips feature alignment marks used for measuring the resulting offset after assembly. The accuracy of the alignment is found to be better than 0.5 µm in × and y directions, depending on the process. The chips also feature waveguides and vertical grating couplers (VGC) fabricated in the front-end-of-line (FEOL) and organized in order to realize an optical interconnection between the chips. The coupling of light between the chips is measured and compared to numerical simulation. This high accuracy self-alignment was obtained after studying the impact of flux and fluxless treatments on the wetting of the pads and the successful assembly yield. The composition of the bump surface was analyzed with Time-of-Flight Secondary Ions Mass Spectroscopy (ToF-SIMS) in order to understand the impact of each treatment. This study confirms that copper pillars micro-bumps can be used to self-align photonic integrated circuits (PIC) with another die (for example a microlens array) in order to achieve high throughput alignment of optical fiber to the PIC.",
"title": ""
},
{
"docid": "e4007c7e6a80006238e1211a213e391b",
"text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.",
"title": ""
},
{
"docid": "18b3328725661770be1f408f37c7eb64",
"text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "307d9742739cbd2ade98c3d3c5d25887",
"text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).",
"title": ""
},
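Two of the back-end steps listed above, envelope detection and log compression, can be sketched as follows; the FFT-based Hilbert transform, the synthetic RF line and the 60 dB dynamic range are illustrative choices rather than details of the SMUS implementation.

```python
import numpy as np

# Envelope detection (analytic-signal magnitude via an FFT Hilbert transform)
# followed by log compression to a fixed dynamic range. Scan conversion,
# filtering and display are omitted.

def envelope(rf_line):
    """Magnitude of the analytic signal of one RF line."""
    n = len(rf_line)
    spectrum = np.fft.fft(rf_line)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h))

def log_compress(env, dynamic_range_db=60.0):
    env = env / (env.max() + 1e-12)
    db = 20.0 * np.log10(np.maximum(env, 1e-12))
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)  # 0..1 pixel values

t = np.arange(2048) / 40e6                      # 40 MHz sampling
rf = np.exp(-((t - 25e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)  # one 5 MHz echo
pixels = log_compress(envelope(rf))
print(pixels.shape, float(pixels.max()))
```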
{
"docid": "4d4a09c7cef74e9be52844a61ca57bef",
"text": "The key of zero-shot learning (ZSL) is how to find the information transfer model for bridging the gap between images and semantic information (texts or attributes). Existing ZSL methods usually construct the compatibility function between images and class labels with the consideration of the relevance on the semantic classes (the manifold structure of semantic classes). However, the relationship of image classes (the manifold structure of image classes) is also very important for the compatibility model construction. It is difficult to capture the relationship among image classes due to unseen classes, so that the manifold structure of image classes often is ignored in ZSL. To complement each other between the manifold structure of image classes and that of semantic classes information, we propose structure propagation (SP) for improving the performance of ZSL for classification. SP can jointly consider the manifold structure of image classes and that of semantic classes for approximating to the intrinsic structure of object classes. Moreover, the SP can describe the constrain condition between the compatibility function and these manifold structures for balancing the influence of the structure propagation iteration. The SP solution provides not only unseen class labels but also the relationship of two manifold structures that encode the positive transfer in structure propagation. Experimental results demonstrate that SP can attain the promising results on the AwA, CUB, Dogs and SUN databases.",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "b7ca3a123963bb2f0bfbe586b3bc63d0",
"text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.",
"title": ""
},
{
"docid": "a5e960a4b20959a1b4a85e08eebab9d3",
"text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.",
"title": ""
},
{
"docid": "d6f473f6b6758b2243dde898840656b0",
"text": "In this paper, we introduce the new generation 3300V HiPak2 IGBT module (130x190)mm employing the recently developed TSPT+ IGBT with Enhanced Trench MOS technology and Field Charge Extraction (FCE) diode. The new chip-set enables IGBT modules with improved electrical performance in terms of low losses, good controllability, high robustness and soft diode recovery. Due to the lower losses and the excellent SOA, the current rating of the 3300V HiPak2 module can be increased from 1500A for the current SPT+ generation to 1800A for the new TSPT+ version.",
"title": ""
},
{
"docid": "7635d39eda6ac2b3969216b39a1aa1f7",
"text": "We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in-focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when observing the display with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses are unfeasible or inconvenient (e.g., on head-mounted displays, e-readers, as well as for games); when a multi-focus function is required but undoable (e.g., driving for farsighted individuals, checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to.",
"title": ""
},
{
"docid": "69f710a71b27cf46039d54e20b5f589b",
"text": "This paper presents a new needle deflection model that is an extension of prior work in our group based on the principles of beam theory. The use of a long flexible needle in percutaneous interventions necessitates accurate modeling of the generated curved trajectory when the needle interacts with soft tissue. Finding a feasible model is important in simulators with applications in training novice clinicians or in path planners used for needle guidance. Using intra-operative force measurements at the needle base, our approach relates mechanical and geometric properties of needle-tissue interaction to the net amount of deflection and estimates the needle curvature. To this end, tissue resistance is modeled by introducing virtual springs along the needle shaft, and the impact of needle-tissue friction is considered by adding a moving distributed external force to the bending equations. Cutting force is also incorporated by finding its equivalent sub-boundary conditions. Subsequently, the closed-from solution of the partial differential equations governing the planar deflection is obtained using Green's functions. To evaluate the performance of our model, experiments were carried out on artificial phantoms.",
"title": ""
},
{
"docid": "c0f46732345837cf959ea9ee030874fd",
"text": "In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts are provided along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying datasets.",
"title": ""
},
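A plain Lee-Seung multiplicative-update NMF (Frobenius objective) is sketched below as the unconstrained baseline for the hybrid methods discussed above; the sparsity and smoothness penalties themselves are not included.

```python
import numpy as np

# Multiplicative-update NMF: V (m x n, nonnegative) is approximated by W @ H
# with W (m x rank) and H (rank x n) kept nonnegative by the update rules.

def nmf(V, rank, iters=200, seed=0, eps=1e-9):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update W
    return W, H

V = np.random.default_rng(1).random((20, 30))    # nonnegative data matrix
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```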
{
"docid": "481d62df8c6cc7ed6bc93a4e3c27a515",
"text": "Minutiae points are defined as the minute discontinuities of local ridge flows, which are widely used as the fine level features for fingerprint recognition. Accurate minutiae detection is important and traditional methods are often based on the hand-crafted processes such as image enhancement, binarization, thinning and tracing of the ridge flows etc. These methods require strong prior knowledge to define the patterns of minutiae points and are easily sensitive to noises. In this paper, we propose a machine learning based algorithm to detect the minutiae points with the gray fingerprint image based on Convolution Neural Networks (CNN). The proposed approach is divided into the training and testing stages. In the training stage, a number of local image patches are extracted and labeled and CNN models are trained to classify the image patches. The test fingerprint is scanned with the CNN model to locate the minutiae position in the testing stage. To improve the detection accuracy, two CNN models are trained to classify the local patch into minutiae v.s. non-minutiae and into ridge ending v.s. bifurcation, respectively. In addition, multi-scale CNNs are constructed with the image patches of varying sizes and are combined to achieve more accurate detection. Finally, the proposed algorithm is tested the fingerprints of FVC2002 DB1 database. Experimental results and comparisons have been presented to show the effectiveness of the proposed method.",
"title": ""
},
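A skeleton of the patch-level binary CNN described above (minutia versus non-minutia); a second network of the same shape would separate endings from bifurcations. The 32x32 patch size, channel counts and the single illustrative training step are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Patch classifier: two conv/pool stages followed by a linear layer that
# outputs two logits (minutia vs. non-minutia).

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)   # 32x32 input -> 8x8 after two pools

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchCNN()
patches = torch.randn(4, 1, 32, 32)          # a mini-batch of grayscale patches
labels = torch.tensor([0, 1, 1, 0])          # 1 = minutia present
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()                              # one illustrative step (optimizer not shown)
print(float(loss))
```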
{
"docid": "57a48dee2cc149b70a172ac5785afc6c",
"text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.",
"title": ""
},
{
"docid": "438e690466823b7ae79cf28f62ba87be",
"text": "Decades of research have documented that young word learners have more difficulty learning verbs than nouns. Nonetheless, recent evidence has uncovered conditions under which children as young as 24 months succeed. Here, we focus in on the kind of linguistic information that undergirds 24-month-olds' success. We introduced 24-month-olds to novel words (either nouns or verbs) as they watched dynamic scenes (e.g., a man waving a balloon); the novel words were presented in semantic contexts that were either rich (e.g., The man is pilking a balloon), or more sparse (e.g., He's pilking it). Toddlers successfully learned nouns in both the semantically rich and sparse contexts, but learned verbs only in the rich context. This documents that to learn the meaning of a novel verb, English-acquiring toddlers take advantage of the semantically rich information provided in lexicalized noun phrases. Implications for cross-linguistic theories of acquisition are discussed.",
"title": ""
}
] |
scidocsrr
|
95f38c4d5fdf85d1c0f4bf2e9b159e2a
|
Perspectives on Ontology Learning
|
[
{
"docid": "6209e9f45cf9bb17c13b23e9e45ff45d",
"text": "This paper provides a self-contained first introduction to d escription logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review lightweight DL languages, discuss the relationship to the OWL Web Ontology Language and give pointers to further reading.",
"title": ""
}
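As a small worked illustration of the kind of syntax the primer above defines (standard notation, not an example taken from the paper), a TBox definition, a SROIQ role-composition axiom and two ABox assertions might look like:

```latex
\begin{align*}
&\text{TBox:} && \mathit{Mother} \equiv \mathit{Woman} \sqcap \exists \mathit{hasChild}.\mathit{Person}\\
&\text{RBox:} && \mathit{hasParent} \circ \mathit{hasBrother} \sqsubseteq \mathit{hasUncle}\\
&\text{ABox:} && \mathit{Woman}(\mathit{mary}), \quad \mathit{hasChild}(\mathit{mary},\mathit{tom})
\end{align*}
```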
] |
[
{
"docid": "01809d609802d949aa8c1604db29419d",
"text": "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish finegrained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using 20% and 33% less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.",
"title": ""
},
{
"docid": "ff572d9c74252a70a48d4ba377f941ae",
"text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.",
"title": ""
},
{
"docid": "d648cdf8423f3ae447f027feb97b02e1",
"text": "This paper proposes a new idea that uses Wikipedia categories as answer types and defines candidate sets inside Wikipedia. The focus of a given question is searched in the hierarchy of Wikipedia main pages. Our searching strategy combines head-noun matching and synonym matching provided in semantic resources. The set of answer candidates is determined by the entry hierarchy in Wikipedia and the hyponymy hierarchy in WordNet. The experimental results show that the approach can find candidate sets in a smaller size but achieve better performance especially for ARTIFACT and ORGANIZATION types, where the performance is better than state-of-the-art Chinese factoid QA systems.",
"title": ""
},
{
"docid": "5399a605e7a1e0337b65ac9ab768aaf6",
"text": "This paper presents a new method to parsllelize the minimax tree search algorithm. This method is then compared to the “Young Brother Wait Concept” algorithm in an Othello program implementation and in a Chess program. Results of tests done on a 32-node CM5 and a 12%node CRAY T3D computers are given.",
"title": ""
},
{
"docid": "55f0aa6a21e4976dc4705b037fd82a11",
"text": "Dynamic topic models (DTMs) are very effective in discovering topics and capturing their evolution trends in time series data. To do posterior inference of DTMs, existing methods are all batch algorithms that scan the full dataset before each update of the model and make inexact variational approximations with mean-field assumptions. Due to a lack of a more scalable inference algorithm, despite the usefulness, DTMs have not captured large topic dynamics. This paper fills this research void, and presents a fast and parallelizable inference algorithm using Gibbs Sampling with Stochastic Gradient Langevin Dynamics that does not make any unwarranted assumptions. We also present a Metropolis-Hastings based O(1) sampler for topic assignments for each word token. In a distributed environment, our algorithm requires very little communication between workers during sampling (almost embarrassingly parallel) and scales up to large-scale applications. We are able to learn the largest Dynamic Topic Model to our knowledge, and learned the dynamics of 1,000 topics from 2.6 million documents in less than half an hour, and our empirical results show that our algorithm is not only orders of magnitude faster than the baselines but also achieves lower perplexity.",
"title": ""
},
{
"docid": "71b0dbd905c2a9f4111dfc097bfa6c67",
"text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One",
"title": ""
},
{
"docid": "34557bc145ccd6d83edfc80da088f690",
"text": "This thesis is dedicated to my mother, who taught me that success is not the key to happiness. Happiness is the key to success. If we love what we are doing, we will be successful. This thesis is dedicated to my father, who taught me that luck is not something that is given to us at random and should be waited for. Luck is the sense to recognize an opportunity and the ability to take advantage of it. iii ACKNOWLEDGEMENTS I would like to thank my thesis committee –",
"title": ""
},
{
"docid": "d2d7595f04af96d7499d7b7c06ba2608",
"text": "Deep Neural Network (DNN) is a widely used deep learning technique. How to ensure the safety of DNN-based system is a critical problem for the research and application of DNN. Robustness is an important safety property of DNN. However, existing work of verifying DNN’s robustness is timeconsuming and hard to scale to large-scale DNNs. In this paper, we propose a boosting method for DNN robustness verification, aiming to find counter-examples earlier. Our observation is DNN’s different inputs have different possibilities of existing counter-examples around them, and the input with a small difference between the largest output value and the second largest output value tends to be the achilles’s heel of the DNN. We have implemented our method and applied it on Reluplex, a state-ofthe-art DNN verification tool, and four DNN attacking methods. The results of the extensive experiments on two benchmarks indicate the effectiveness of our boosting method.",
"title": ""
},
{
"docid": "f3c60d98d521ac7853bde863808e8930",
"text": "In recent years cybersecurity has gained prominence as a field of expertise and the relevant practical skills are in high demand. To reduce the cost and amount of dedicated hardware required to set up a cybersecurity lab to teach those skills, several virtualization and outsourcing approaches were developed but the resulting setup has often increased in total complexity, hampering adoption. In this paper we present a very simple (and therefore highly scalable) setup that incorporates state-of-the-art industry tools. We also describe a structured set of lab assignments developed for this setup that build one on top of the other to cover the material of a semester-long Cybersecurity course taught at Boston University. We explore alternative lab architectures, discuss other existing sets of lab assignments and present some ideas for further improvement.",
"title": ""
},
{
"docid": "33ce6e07bc4031f1b915e32769d5c984",
"text": "MOTIVATION\nDIYABC is a software package for a comprehensive analysis of population history using approximate Bayesian computation on DNA polymorphism data. Version 2.0 implements a number of new features and analytical methods. It allows (i) the analysis of single nucleotide polymorphism data at large number of loci, apart from microsatellite and DNA sequence data, (ii) efficient Bayesian model choice using linear discriminant analysis on summary statistics and (iii) the serial launching of multiple post-processing analyses. DIYABC v2.0 also includes a user-friendly graphical interface with various new options. It can be run on three operating systems: GNU/Linux, Microsoft Windows and Apple Os X.\n\n\nAVAILABILITY\nFreely available with a detailed notice document and example projects to academic users at http://www1.montpellier.inra.fr/CBGP/diyabc CONTACT: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "fa94ee5e70d030270f317093b852a4e1",
"text": "This paper studies how home wireless performance characteristics affect the performance of user traffic in real homes. Previous studies have focused either on wireless metrics exclusively, without connection to the performance of user traffic; or on the performance of the home network at higher layers. In contrast, we deploy a passive measurement tool on commodity access points to correlate wireless performance metrics with TCP performance of user traffic. We implement our measurement tool, deploy it on commodity routers in 66 homes for one month, and study the relationship between wireless metrics and TCP performance of user traffic. We find that, most of the time, TCP flows from devices in the home achieve only a small fraction of available access link throughput; as the throughput of user traffic approaches the access link throughput, the characteristics of the home wireless network more directly affect performance. We also find that the 5 GHz band offers users better performance better than the 2.4 GHz band, and although the performance of devices varies within the same home, many homes do not have multiple devices sending high traffic volumes, implying that certain types of wireless contention may be uncommon in practice.",
"title": ""
},
{
"docid": "868f0b672676736dbf5688822fcb9ba3",
"text": "Inspired by the attractive Flops/dollar ratio and the incredible growth in the speed of modern graphics processing units (GPUs), we propose to use a cluster of GPUs for high performance scientific computing. As an example application, we have developed a parallel flow simulation using the lattice Boltzmann model (LBM) on a GPU cluster and have simulated the dispersion of airborne contaminants in the Times Square area of New York City. Using 30 GPU nodes, our simulation can compute a 480x400x80 LBM in 0.31second/step, a speed which is 4.6 times faster than that of our CPU cluster implementation. Besides the LBM, we also discuss other potential applications of the GPU cluster, such as cellular automata, PDE solvers, and FEM.",
"title": ""
},
{
"docid": "55a1bedc3aa007a4e8bbc77d6f710d7f",
"text": "The purpose of the present study was to develop and validate a self-report instrument that measures the nature of the coach-athlete relationship. Jowett et al.'s (Jowett & Meek, 2000; Jowett, in press) qualitative case studies and relevant literature were used to generate items for an instrument that measures affective, cognitive, and behavioral aspects of the coach-athlete relationship. Two studies were carried out in an attempt to assess content, predictive, and construct validity, as well as internal consistency, of the Coach-Athlete Relationship Questionnaire (CART-Q), using two independent British samples. Principal component analysis and confirmatory factor analysis were used to reduce the number of items, identify principal components, and confirm the latent structure of the CART-Q. Results supported the multidimensional nature of the coach-athlete relationship. The latent structure of the CART-Q was underlined by the latent variables of coaches' and athletes' Closeness (emotions), Commitment (cognitions), and Complementarity (behaviors).",
"title": ""
},
{
"docid": "255de21131ccf74c3269cc5e7c21820b",
"text": "This paper discusses the effect of driving current on frequency response of the two types of light emitting diodes (LEDs), namely, phosphor-based LED and single color LED. The experiments show that the influence of the change of driving current on frequency response of phosphor-based LED is not obvious compared with the single color LED(blue, red and green). The experiments also find that the bandwidth of the white LED was expanded from 1MHz to 32MHz by the pre-equalization strategy and 26Mbit/s transmission speed was taken under Bit Error Ratio of 7.55×10-6 within 3m by non-return-to-zero on-off-keying modulation. Especially, the frequency response intensity of the phosphor-based LED is little influenced by the fluctuation of the driving current, which meets the requirements that the indoor light source needs to be adjusted in real-time by driving current. As the bandwidth of the single color LED is changed by the driving current obviously, the LED modulation bandwidth should be calculated according to the minimum driving current while we consider the requirement of the VLC transmission speed.",
"title": ""
},
{
"docid": "740b783d840a706992dc6977a918f1f1",
"text": "Inadequate curriculum for software engineering is considered to be one of the most common software risks. A number of solutions, on improving Software Engineering Education (SEE) have been reported in literature but there is a need to collectively present these solutions at one place. We have performed a mapping study to present a broad view of literature; published on improving the current state of SEE. Our aim is to give academicians, practitioners and researchers an international view of the current state of SEE. Our study has identified 70 primary studies that met our selection criteria, which we further classified and categorized in a well-defined Software Engineering educational framework. We found that the most researched category within the SE educational framework is Innovative Teaching Methods whereas the least amount of research was found in Student Learning and Assessment category. Our future work is to conduct a Systematic Literature Review on SEE. Keywords—Mapping Study, Software Engineering, Software Engineering Education, Literature Survey.",
"title": ""
},
{
"docid": "b1422b2646f02a5a84a6a4b13f5ae7d8",
"text": "Two experiments examined the influence of timbre on auditory stream segregation. In experiment 1, listeners heard sequences of orchestral tones equated for pitch and loudness, and they rated how strongly the instruments segregated. Multidimensional scaling analyses of these ratings revealed that segregation was based on the static and dynamic acoustic attributes that influenced similarity judgements in a previous experiment (P Iverson & CL Krumhansl, 1993). In Experiment 2, listeners heard interleaved melodies and tried to recognize the melodies played by a target timbre. The results extended the findings of Experiment 1 to tones varying pitch. Auditory stream segregation appears to be influenced by gross differences in static spectra and by dynamic attributes, including attack duration and spectral flux. These findings support a gestalt explanation of stream segregation and provide evidence against peripheral channel model.",
"title": ""
},
{
"docid": "648a5479933eb4703f1d2639e0c3b5c7",
"text": "The Surgery Treatment Modality Committee of the Korean Gynecologic Oncologic Group (KGOG) has determined to develop a surgical manual to facilitate clinical trials and to improve communication between investigators by standardizing and precisely describing operating procedures. The literature on anatomic terminology, identification of surgical components, and surgical techniques were reviewed and discussed in depth to develop a surgical manual for gynecologic oncology. The surgical procedures provided here represent the minimum requirements for participating in a clinical trial. These procedures should be described in the operation record form, and the pathologic findings obtained from the procedures should be recorded in the pathologic report form. Here, we focused on radical hysterectomy and lymphadenectomy, and we developed a KGOG classification for those conditions.",
"title": ""
},
{
"docid": "699ef9eecd9d7fbef01930915c3480f0",
"text": "Disassembly of the cone-shaped HIV-1 capsid in target cells is a prerequisite for establishing a life-long infection. This step in HIV-1 entry, referred to as uncoating, is critical yet poorly understood. Here we report a novel strategy to visualize HIV-1 uncoating using a fluorescently tagged oligomeric form of a capsid-binding host protein cyclophilin A (CypA-DsRed), which is specifically packaged into virions through the high-avidity binding to capsid (CA). Single virus imaging reveals that CypA-DsRed remains associated with cores after permeabilization/removal of the viral membrane and that CypA-DsRed and CA are lost concomitantly from the cores in vitro and in living cells. The rate of loss is modulated by the core stability and is accelerated upon the initiation of reverse transcription. We show that the majority of single cores lose CypA-DsRed shortly after viral fusion, while a small fraction remains intact for several hours. Single particle tracking at late times post-infection reveals a gradual loss of CypA-DsRed which is dependent on reverse transcription. Uncoating occurs both in the cytoplasm and at the nuclear membrane. Our novel imaging assay thus enables time-resolved visualization of single HIV-1 uncoating in living cells, and reveals the previously unappreciated spatio-temporal features of this incompletely understood process.",
"title": ""
},
{
"docid": "14835b93b580081b0398e5e370b72c2c",
"text": "In order for autonomous vehicles to achieve life-long operation in outdoor environments, navigation systems must be able to cope with visual change—whether it’s short term, such as variable lighting or weather conditions, or long term, such as different seasons. As a Global Positioning System (GPS) is not always reliable, autonomous vehicles must be self sufficient with onboard sensors. This thesis examines the problem of localisation against a known map across extreme lighting and weather conditions using only a stereo camera as the primary sensor. The method presented departs from traditional techniques that blindly apply out-of-the-box interest-point detectors to all images of all places. This naive approach fails to take into account any prior knowledge that exists about the environment in which the robot is operating. Furthermore, the point-feature approach often fails when there are dramatic appearance changes, as associating low-level features such as corners or edges is extremely difficult and sometimes not possible. By leveraging knowledge of prior appearance, this thesis presents an unsupervised method for learning a set of distinctive and stable (i.e., stable under appearance changes) feature detectors that are unique to a specific place in the environment. In other words, we learn place-dependent feature detectors that enable vastly superior performance in terms of robustness in exchange for a reduced, but tolerable metric precision. By folding in a method for masking distracting objects in dynamic environments and examining a simple model for external illuminates, such as the sun, this thesis presents a robust localisation system that is able to achieve metric estimates from night-today or summer-to-winter conditions. Results are presented from various locations in the UK, including the Begbroke Science Park, Woodstock, Oxford, and central London. Statement of Authorship This thesis is submitted to the Department of Engineering Science, University of Oxford, in fulfilment of the requirements for the degree of Doctor of Philosophy. This thesis is entirely my own work, and except where otherwise stated, describes my own research. Colin McManus, Lady Margaret Hall Funding The work described in this thesis was funded by Nissan Motors.",
"title": ""
},
{
"docid": "d4fa5b9d4530b12a394c1e98ea2793b1",
"text": "Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.",
"title": ""
}
] |
scidocsrr
|
9cf359b6932bcf61505b74ba3c1c5d7b
|
Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems
|
[
{
"docid": "f670b91f8874c2c2db442bc869889dbd",
"text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "50d22974ef09d0f02ee05d345e434055",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
}
] |
[
{
"docid": "35cbd0797156630e7b3edf7cf76868c1",
"text": "Given a bipartite graph of users and the products that they review, or followers and followees, how can we detect fake reviews or follows? Existing fraud detection methods (spectral, etc.) try to identify dense subgraphs of nodes that are sparsely connected to the remaining graph. Fraudsters can evade these methods using camouflage, by adding reviews or follows with honest targets so that they look “normal.” Even worse, some fraudsters use hijacked accounts from honest users, and then the camouflage is indeed organic.\n Our focus is to spot fraudsters in the presence of camouflage or hijacked accounts. We propose FRAUDAR, an algorithm that (a) is camouflage resistant, (b) provides upper bounds on the effectiveness of fraudsters, and (c) is effective in real-world data. Experimental results under various attacks show that FRAUDAR outperforms the top competitor in accuracy of detecting both camouflaged and non-camouflaged fraud. Additionally, in real-world experiments with a Twitter follower--followee graph of 1.47 billion edges, FRAUDAR successfully detected a subgraph of more than 4, 000 detected accounts, of which a majority had tweets showing that they used follower-buying services.",
"title": ""
},
{
"docid": "4071b0a0f3887a5ad210509e6ad5498a",
"text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.",
"title": ""
},
{
"docid": "2f901bcc774a104db449e38fd8ebb3c4",
"text": "Web service composition concerns the building of new value added services by integrating the sets of existing web services. Due to the seamless proliferation of web services, it becomes difficult to find a suitable web service that satisfies the requirements of users during web service composition. This paper systematically reviews existing research on QoS-aware web service composition using computational intelligence techniques (published between 2005 and 2015). This paper develops a classification of research approaches on computational intelligence based QoS-aware web service composition and describes future research directions in this area. In particular, the results of this study confirms that new meta-heuristic algorithms have not yet been applied for solving QoS-aware web services composition.",
"title": ""
},
{
"docid": "dfb3a6fea5c2b12e7865f8b6664246fb",
"text": "We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, *An earlier version of this article was entitled \"Cumulative Prospect Theory: An Analysis of Decision under Uncertainty.\" This article has benefited from discussions with Colin Camerer, Chew Soo-Hong, David Freedman, and David H. Krantz. We are especially grateful to Peter P. Wakker for his invaluable input and contribution to the axiomatic analysis. We are indebted to Richard Gonzalez and Amy Hayes for running the experiment and analyzing the data. This work was supported by Grants 89-0064 and 88-0206 from the Air Force Office of Scientific Research, by Grant SES-9109535 from the National Science Foundation, and by the Sloan Foundation. 298 AMOS TVERSKY/DANIEL KAHNEMAN and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. 
To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of .99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss' aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. ADVANCES IN PROSPECT THEORY 299 The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. Additional phenomena that lie beyond the scope of the theory--and of its alternatives--are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix.",
"title": ""
},
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
},
{
"docid": "9a58dc3eada29c2b929c4442ce0ac025",
"text": "Gamification is the application of game elements and game design techniques in non-game contexts to engage and motivate people to achieve their goals. Motivation is an essential requirement for effective and efficient collaboration, which is particularly challenging when people work distributedly. In this paper, we discuss the topics of collaboration, motivation, and gamification in the context of software engineering. We then introduce our long-term research goal—building a theoretical framework that defines how gamification can be used as a collaboration motivator for virtual software teams. We also highlight the roles that social and cultural issues might play in understanding the phenomenon. Finally, we give an overview of our proposed research method to foster discussion during the workshop on how to best investigate the topic. Author",
"title": ""
},
{
"docid": "3cc6d54cb7a8507473f623a149c3c64b",
"text": "The measurement of loyalty is a topic of great interest for the marketing academic literature. The relation that loyalty has with the results of organizations has been tested by numerous studies and the search to retain profitable customers has become a maxim in firm management. Tourist destinations have not remained oblivious to this trend. However, the difficulty involved in measuring the loyalty of a tourist destination is a brake on its adoption by those in charge of destination management. The usefulness of measuring loyalty lies in being able to apply strategies which enable improving it, but that also impact on the enhancement of the organization’s results. The study of tourists’ loyalty to a destination is considered relevant for the literature and from the point of view of the management of the multiple actors involved in the tourist activity. Based on these considerations, this work proposes a synthetic indictor that allows the simple measurement of the tourist’s loyalty. To do so, we used as a starting point Best’s (2007) customer loyalty index adapted to the case of tourist destinations. We also employed a variable of results – the tourist’s overnight stays in the destination – to create a typology of customers according to their levels of loyalty and the number of their overnight stays. The data were obtained from a survey carried out with 2373 tourists of the city of Seville. In accordance with the results attained, the proposal of the synthetic indicator to measure tourist loyalty is viable, as it is a question of a simple index constructed from easily obtainable data. Furthermore, four groups of tourists have been identified, according to their degree of loyalty and profitability, using the number of overnight stays of the tourists in their visit to the destination. The study’s main contribution stems from the possibility of simply measuring loyalty and from establishing four profiles of tourists for which marketing strategies of differentiated relations can be put into practice and that contribute to the improvement of the destination’s results. © 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "307d92caf4ff7e64db7d5f23035a7440",
"text": "In this paper, the effective use of flight-time constrained unmanned aerial vehicles (UAVs) as flying base stations that provide wireless service to ground users is investigated. In particular, a novel framework for optimizing the performance of such UAV-based wireless systems in terms of the average number of bits (data service) transmitted to users as well as the UAVs’ hover duration (i.e. flight time) is proposed. In the considered model, UAVs hover over a given geographical area to serve ground users that are distributed within the area based on an arbitrary spatial distribution function. In this case, two practical scenarios are considered. In the first scenario, based on the maximum possible hover times of UAVs, the average data service delivered to the users under a fair resource allocation scheme is maximized by finding the optimal cell partitions associated to the UAVs. Using the powerful mathematical framework of optimal transport theory, this cell partitioning problem is proved to be equivalent to a convex optimization problem. Subsequently, a gradient-based algorithm is proposed for optimally partitioning the geographical area based on the users’ distribution, hover times, and locations of the UAVs. In the second scenario, given the load requirements of ground users, the minimum average hover time that the UAVs need for completely servicing their ground users is derived. To this end, first, an optimal bandwidth allocation scheme for serving the users is proposed. Then, given this optimal bandwidth allocation, optimal cell partitions associated with the UAVs are derived by exploiting the optimal transport theory. Simulation results show that our proposed cell partitioning approach leads to a significantly higher fairness among the users compared with the classical weighted Voronoi diagram. Furthermore, the results demonstrate that the average hover time of the UAVs can be reduced by 64% by adopting the proposed optimal bandwidth allocation scheme as well as the optimal cell partitioning approach. In addition, our results reveal an inherent tradeoff between the hover time of UAVs and bandwidth efficiency while serving the ground users.",
"title": ""
},
{
"docid": "6bf2280158dca2d69501255d47322246",
"text": "Distal deletion of the long arm of chromosome 10 is associated with a dysmorphic craniofacial appearance, microcephaly, behavioral issues, developmental delay, intellectual disability, and ocular, urogenital, and limb abnormalities. Herein, we present clinical, molecular, and cytogenetic investigations of four patients, including two siblings, with nearly identical terminal deletions of 10q26.3, all of whom have an atypical presentation of this syndrome. Their prominent features include ataxia, mild-to-moderate intellectual disability, and hyperemia of the hands and feet, and they do not display many of the other features commonly associated with deletions of this region. These results point to a novel gene locus associated with ataxia and highlight the variability of the clinical presentation of patients with deletions of this region.",
"title": ""
},
{
"docid": "d611a165b088d7087415aa2c8843b619",
"text": "Type synthesis of 1-DOF remote center of motion (RCM) mechanisms is the preliminary for research on many multiDOF RCM mechanisms. Since types of existing RCM mechanisms are few, it is necessary to find an efficient way to create more new RCM mechanisms. In this paper, existing 1-DOF RCM mechanisms are first classified, then base on a proposed concept of the planar virtual center (VC) mechanism, which is a more generalized concept than a RCM mechanism, two approaches of type synthesis for 1-DOF RCM mechanisms are addressed. One case is that a 1-DOF parallel or serial–parallel RCM mechanism can be constructed by assembling two planar VC mechanisms; the other case, a VC mechanism can be expanded to a serial–parallel RCM mechanism. Concrete samples are provided accordingly, some of which are new types. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fe2870a3f36b9a042ec9cece5a64dafd",
"text": "This paper provides a methodology to study the PHY layer vulnerability of wireless protocols in hostile radio environments. Our approach is based on testing the vulnerabilities of a system by analyzing the individual subsystems. By targeting an individual subsystem or a combination of subsystems at a time, we can infer the weakest part and revise it to improve the overall system performance. We apply our methodology to 4G LTE downlink by considering each control channel as a subsystem. We also develop open-source software enabling research and education using software-defined radios. We present experimental results with open-source LTE systems and shows how the different subsystems behave under targeted interference. The analysis for the LTE downlink shows that the synchronization signals (PSS/SSS) are very resilient to interference, whereas the downlink pilots or Cell-Specific Reference signals (CRS) are the most susceptible to a synchronized protocol-aware interferer. We also analyze the severity of control channel attacks for different LTE configurations. Our methodology and tools allow rapid evaluation of the PHY layer reliability in harsh signaling environments, which is an asset to improve current standards and develop new and robust wireless protocols.",
"title": ""
},
{
"docid": "2d66994a185ee4d57c87ac8b012c86ac",
"text": "The majority of projects dealing with monitoring and diagnosis of Cyber Physical Systems (CPSs) relies on models created by human experts. But these models are rarely available, are hard to verify and to maintain and are often incomplete. Data-driven approaches are a promising alternative: They leverage on the large amount of data which is collected nowadays in CPSs, this data is then used to learn the necessary models automatically. For this, several challenges have to be tackled, such as real-time data acquisition and storage solutions, data analysis and machine learning algorithms, task specific human-machine-interfaces (HMI) and feedback/control mechanisms. In this paper, we propose a cognitive reference architecture which addresses these challenges. This reference architecture should both ease the reuse of algorithms and support scientific discussions by providing a comparison schema. Use cases from different industries are outlined and support the correctness of the architecture.",
"title": ""
},
{
"docid": "313b4f6832d45a428fe264cc16e6ff9f",
"text": "This theme issue provides a comprehensive collection of original research articles on the creation of diverse types of theranostic upconversion nanoparticles, their fundamental interactions in biology, as well as their biophotonic applications in noninvasive diagnostics and therapy.",
"title": ""
},
{
"docid": "ebc6b9c213fd20397aaabe1a15a36591",
"text": "In this paper, we propose an Arabic Question-Answering (Q-A) system called QASAL «Question -Answering system for Arabic Language». QASAL accepts as an input a natural language question written in Modern Standard Arabic (MSA) and generates as an output the most efficient and appropriate answer. The proposed system is composed of three modules: A question analysis module, a passage retrieval module and an answer extraction module. To process these three modules we use the NooJ Platform which represents a linguistic development environment.",
"title": ""
},
{
"docid": "96af91aed1c131f1c8c9d8076ed5835d",
"text": "Hedge funds are unique among investment vehicles in that they are relatively unconstrained in their use of derivative investments, short-selling, and leverage. This flexibility allows investment managers to span a broad spectrum of distinct risks, such as momentum and option-like investments. Taking a revealed preference approach, we find that Capital Asset Pricing Model (CAPM) alpha explains hedge fund flows better than alphas from more sophisticated models. This result suggests that investors pool together sophisticated model alpha with returns from exposures to traditional and exotic risks. We decompose performance into traditional and exotic risk components and find that while investors chase both components, they place greater relative emphasis on returns associated with exotic risk exposures that can only be obtained through hedge funds. However, we find little evidence of persistence in performance from traditional or exotic risks, which cautions against investors’ practice of seeking out risk exposures following periods of recent success.",
"title": ""
},
{
"docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7",
"text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
},
{
"docid": "fef4ab4bebb16560135cbf4d49c63b4d",
"text": "The two-fold aim of the paper is to unify and generalize on the one hand the double integrals of Beukers for ζ(2) and ζ(3), and of the second author for Euler’s constant γ and its alternating analog ln(4/π), and on the other hand the infinite products of the first author for e, of the second author for π, and of Ser for e . We obtain new double integral and infinite product representations of many classical constants, as well as a generalization to Lerch’s transcendent of Hadjicostas’s double integral formula for the Riemann zeta function, and logarithmic series for the digamma and Euler beta functions. The main tools are analytic continuations of Lerch’s function, including Hasse’s series. We also use Ramanujan’s polylogarithm formula for the sum of a particular series involving harmonic numbers, and his relations between certain dilogarithm values.",
"title": ""
}
] |
scidocsrr
|
42fbc64c151714a87558e63ee70bdfea
|
Deep Deterministic Policy Gradient for Urban Traffic Light Control
|
[
{
"docid": "05a4ec72afcf9b724979802b22091fd4",
"text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.",
"title": ""
}
] |
[
{
"docid": "aada9722cb54130151657a84417d14a1",
"text": "Classical theories of sensory processing view the brain as a passive, stimulus-driven device. By contrast, more recent approaches emphasize the constructive nature of perception, viewing it as an active and highly selective process. Indeed, there is ample evidence that the processing of stimuli is controlled by top–down influences that strongly shape the intrinsic dynamics of thalamocortical networks and constantly create predictions about forthcoming sensory events. We discuss recent experiments indicating that such predictions might be embodied in the temporal structure of both stimulus-evoked and ongoing activity, and that synchronous oscillations are particularly important in this process. Coherence among subthreshold membrane potential fluctuations could be exploited to express selective functional relationships during states of expectancy or attention, and these dynamic patterns could allow the grouping and selection of distributed neuronal responses for further processing.",
"title": ""
},
{
"docid": "1572891f4c2ab064c6d6a164f546e7c1",
"text": "BACKGROUND Unexplained gastrointestinal (GI) symptoms and joint hypermobility (JHM) are common in the general population, the latter described as benign joint hypermobility syndrome (BJHS) when associated with musculo-skeletal symptoms. Despite overlapping clinical features, the prevalence of JHM or BJHS in patients with functional gastrointestinal disorders has not been examined. METHODS The incidence of JHM was evaluated in 129 new unselected tertiary referrals (97 female, age range 16-78 years) to a neurogastroenterology clinic using a validated 5-point questionnaire. A rheumatologist further evaluated 25 patients with JHM to determine the presence of BJHS. Groups with or without JHM were compared for presentation, symptoms and outcomes of relevant functional GI tests. KEY RESULTS Sixty-three (49%) patients had evidence of generalized JHM. An unknown aetiology for GI symptoms was significantly more frequent in patients with JHM than in those without (P < 0.0001). The rheumatologist confirmed the clinical impression of JHM in 23 of 25 patients, 17 (68%) of whom were diagnosed with BJHS. Patients with co-existent BJHS and GI symptoms experienced abdominal pain (81%), bloating (57%), nausea (57%), reflux symptoms (48%), vomiting (43%), constipation (38%) and diarrhoea (14%). Twelve of 17 patients presenting with upper GI symptoms had delayed gastric emptying. One case is described in detail. CONCLUSIONS & INFERENCES In a preliminary retrospective study, we have found a high incidence of JHM in patients referred to tertiary neurogastroenterology care with unexplained GI symptoms and in a proportion of these a diagnosis of BJHS is made. Symptoms and functional tests suggest GI dysmotility in a number of these patients. The possibility that a proportion of patients with unexplained GI symptoms and JHM may share a common pathophysiological disorder of connective tissue warrants further investigation.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "0ee744ad3c75f7bb9695c47165d87043",
"text": "Clustering is a critical component of many data analysis tasks, but is exceedingly difficult to fully automate. To better incorporate domain knowledge, researchers in machine learning, human-computer interaction, visualization, and statistics have independently introduced various computational tools to engage users through interactive clustering. In this work-in-progress paper, we present a cross-disciplinary literature survey, and find that existing techniques often do not meet the needs of real-world data analysis. Semi-supervised machine learning algorithms often impose prohibitive user interaction costs or fail to account for external analysis requirements. Human-centered approaches and user interface designs often fall short because of their insufficient statistical modeling capabilities. Drawing on effective approaches from each field, we identify five characteristics necessary to support effective human-in-the-loop interactive clustering: iterative, multi-objective, local updates that can operate on any initial clustering and a dynamic set of features. We outline key aspects of our technique currently under development, and share our initial evidence suggesting that all five design considerations can be incorporated into a single algorithm. We plan to demonstrate our technique on three data analysis tasks: feature engineering for classification, exploratory analysis of biomedical data, and multi-document summarization.",
"title": ""
},
{
"docid": "4c102cb77b3992f6cb29a117994804eb",
"text": "These current studies explored the impact of individual differences in personality factors on interface interaction and learning performance behaviors in both an interactive visualization and a menu-driven web table in two studies. Participants were administered 3 psychometric measures designed to assess Locus of Control, Extraversion, and Neuroticism. Participants were then asked to complete multiple procedural learning tasks in each interface. Results demonstrated that all three measures predicted completion times. Additionally, results analyses demonstrated personality factors also predicted the number of insights participants reported while completing the tasks in each interface. We discuss how these findings advance our ongoing research in the Personal Equation of Interaction.",
"title": ""
},
{
"docid": "435200b067ebd77f69a04cc490d73fa6",
"text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.",
"title": ""
},
{
"docid": "03a6425423516d0f978bb5f8abe0d62d",
"text": "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive selfimprovement.",
"title": ""
},
{
"docid": "6d471fcfa68cfb474f2792892e197a66",
"text": "The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the rightamount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion.",
"title": ""
},
{
"docid": "5956e9399cfe817aa1ddec5553883bef",
"text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"title": ""
},
{
"docid": "bdf16e241e4d33af64b7dd5a97873a2c",
"text": "Although lentils (Lens culinaris L) contain several bioactive compounds that have been linked to the prevention of cancer, the in vivo chemopreventive ability of lentils against chemically induced colorectal cancer has not been examined. Our present study examined the hypothesis that lentils could suppress the early carcinogenesis in vivo by virtue of their bioactive micro- and macroconstituents and that culinary thermal treatment could affect their chemopreventive potential. To accomplish this goal, we used raw whole lentils (RWL), raw split lentils (RSL), cooked whole lentils (CWL), and cooked split lentils (CSL). Raw soybeans (RSB; Glycine max) were used for the purpose of comparison with a well-studied chemopreventive agent. Sixty weanling Fischer 344 male rats, 4 to 5 weeks of age, were randomly assigned to 6 groups (10 rats/group): the control group (C) received AIN-93G diet, and treatment leguminous groups of RWL, CWL, RSL, CSL, and RSB received the treatment diets containing AIN-93G+5% of the above-mentioned legumes. After acclimatization for 1 week (at 5th to 6th week of age), all animals were put on the control and treatment diets separately for 5 weeks (from 6th to 11th week of age). At the end of the 5th week of feeding (end of 11th week of age), all rats received 2 subcutaneous injections of azoxymethane carcinogen at 15 mg/kg rat body weight per dose once a week for 2 consecutive weeks. After 17 weeks of the last azoxymethane injection (from 12th to 29th week of age), all rats were euthanized. Chemopreventive ability was assessed using colonic aberrant crypt foci and activity of hepatic glutathione-S-transferases. Significant reductions (P < .05) were found in total aberrant crypt foci number (mean +/- SEM) for RSB (27.33 +/- 4.32), CWL (33.44 +/- 4.56), and RSL (37.00 +/- 6.02) in comparison with the C group (58.33 +/- 8.46). Hepatic glutathione-S-transferases activities increased significantly (P < .05) in rats fed all treatment diets (from 51.38 +/- 3.66 to 67.94 +/- 2.01 micromol mg(-1) min(-1)) when compared with control (C) diet (26.13 +/- 1.01 micromol mg(-1) min(-1)). Our findings indicate that consumption of lentils might be protective against colon carcinogenesis and that hydrothermal treatment resulted in an improvement in the chemopreventive potential for the whole lentils.",
"title": ""
},
{
"docid": "dd51d7253e6e249980e4f1f945f93c84",
"text": "In real-time strategy games like StarCraft, skilled players often block the entrance to their base with buildings to prevent the opponent’s units from getting inside. This technique, called “walling-in”, is a vital part of player’s skill set, allowing him to survive early aggression. However, current artificial players (bots) do not possess this skill, due to numerous inconveniences surfacing during its implementation in imperative languages like C++ or Java. In this text, written as a guide for bot programmers, we address the problem of finding an appropriate building placement that would block the entrance to player’s base, and present a ready to use declarative solution employing the paradigm of answer set programming (ASP). We also encourage the readers to experiment with different declarative approaches to this problem.",
"title": ""
},
{
"docid": "78a0898f35113547cdc3adb567ad7afb",
"text": "Phishing is a form of online identity theft. Phishers use social engineering to steal victims' personal identity data and financial account credentials. Social engineering schemes use spoofed e-mails to lure unsuspecting victims into counterfeit websites designed to trick recipients into divulging financial data such as credit card numbers, account usernames, passwords and social security numbers. This is called a deceptive phishing attack. In this paper, a thorough overview of a deceptive phishing attack and its countermeasure techniques, which is called anti-phishing, is presented. Firstly, technologies used by phishers and the definition, classification and future works of deceptive phishing attacks are discussed. Following with the existing anti-phishing techniques in literatures and research-stage technologies are shown, and a thorough analysis which includes the advantages and shortcomings of countermeasures is given. At last, we show the research of why people fall for phishing attack.",
"title": ""
},
{
"docid": "cce5d75bfcfc22f7af08f6b0b599d472",
"text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policement (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.",
"title": ""
},
{
"docid": "b3840c076852c5bc9a2f50e1a1938780",
"text": "The rapid progress in medical and technical innovations in the neonatal intensive care unit (NICU) has been accompanied by concern for outcomes of NICU graduates. Although advances in neonatal care have led to significant changes in survival rates of very small and extremely preterm neonates, early feeding difficulties with the transition from tube feeding to oral feeding are prominent and often persist beyond discharge to home. Progress in learning to feed in the NICU and continued growth in feeding skills after the NICU may be closely tied to fostering neuroprotection and safety. The experience of learning to feed in the NICU may predispose preterm neonates to feeding problems that persist. Neonatal feeding as an area of specialized clinical practice has grown considerably in the last decade. This article is the first in a two-part series devoted to neonatal feeding. Part 1 explores factors in NICU feeding experiences that may serve to constrain or promote feeding skill development, not only in the NICU but long after discharge to home. Part II describes approaches to intervention that support neuroprotection and safety.",
"title": ""
},
{
"docid": "80b86f424d8f99a28f0bd4d16a89fe3d",
"text": "Programming is traditionally taught using a bottom-up approach, where details of syntax and implementation of data structures are the predominant concepts. The top-down approach proposed focuses instead on understanding the abstractions represented by the classical data structures without regard to their physical implementation. Only after the students are comfortable with the behavior and applications of the major data structures do they learn about their implementations or the basic data types like arrays and pointers that are used. This paper discusses the benefits of such an approach and how it is being used in a Computer Science curriculum.",
"title": ""
},
{
"docid": "430d74071a8b399675d10d43b3b337ac",
"text": "Machine learning systems can often achieve high performance on a test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. Based on an analysis of the task, we hypothesize three fallible syntactic heuristics that NLI models are likely to adopt: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI Systems), which contains many examples where the heuristics fail. We find that models trained on MNLI, including the state-of-the-art model BERT, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area.",
"title": ""
},
{
"docid": "dfc44cd25a729035e93dbd1a04806510",
"text": "Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users’ preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on improvement of user feedback.",
"title": ""
},
{
"docid": "e9006af64364e6dcd1ea4684642539de",
"text": "Since the publication of the PDP volumes in 1986,1 learning by backpropagation has become the most popular method of training neural networks. The reason for the popularity is the underlying simplicity and relative power of the algorithm. Its power derives from the fact that, unlike its precursors, the perceptron learning rule and the Widrow-Hoff learning rule, it can be employed for training nonlinear networks of arbitrary connectivity. Since such networks are often required for real-world applications, such a learning procedure is critical. Nearly as important as its power in explaining its popularity is its simplicity. The basic igea is old and simple; namely define an error function and use hill climbing (or gradient descent if you prefer going downhill) to find a set of weights which optimize performance on a particular task. The algorithm is so simple that it can be implemented in a few lines' of code, and there have been no doubt many thousands of implementations of the algorithm by now. The name back propagation actually comes from the term employed by Rosenblatt (1962) for his attempt to generalize the perceptron learning algorithm to the multilayer case. There were many attempts to generalize the perceptron learning procedure to multiple layers during the 1960s and 1970s, but none of them were especially successful. There appear to have been at least three independent inventions of the modem version of the back-propagation algorithm: Paul Werbos developed the basic idea in 1974 in a Ph.D. dissertation entitled",
"title": ""
},
{
"docid": "d1c4e0da79ceb8893f63aa8ea7c8041c",
"text": "This paper describes the GOLD (Generic Obstacle and Lane Detection) system, a stereo vision-based hardware and software architecture developed to increment road safety of moving vehicles: it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings). It has been implemented on the PAPRICA system and works at a rate of 10 Hz.",
"title": ""
},
{
"docid": "19339fa01942ad3bf33270aa1f6ceae2",
"text": "This study investigated query formulations by users with {\\it Cognitive Search Intents} (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, {\\em e.g. comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description;(ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs;(iii) our user study also revealed that over 50\\% of subjects occasionally had experiences with searches with CSIs while our evaluations demonstrated that the performance of a current Web search engine was much lower when we not only considered users' topical search intents but also CSIs; and (iv) we demonstrated that a machine-learning-based query expansion could improve the performances for some types of CSIs.Our findings suggest users over-adapt to current Web search engines,and create opportunities to estimate CSIs with non-verbal user input.",
"title": ""
}
] |
scidocsrr
|
5f8cd134cbf9965c9a961a4bebcc312d
|
An Agile Approach to Building RISC-V Microprocessors
|
[
{
"docid": "35f4a8131a27298b1aa04859450e6620",
"text": "Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic–photonic systems enabled by silicon-based nanophotonic devices8. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic–photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic–photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.",
"title": ""
}
] |
[
{
"docid": "cf751df3c52306a106fcd00eef28b1a4",
"text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "25cd669a4fcf62ff56669bff22974634",
"text": "In this paper, we introduce a novel framework for combining scientific knowledge within physicsbased models and recurrent neural networks to advance scientific discovery in many dynamical systems. We will first describe the use of outputs from physics-based models in learning a hybrid-physics-data model. Then, we further incorporate physical knowledge in real-world dynamical systems as additional constraints for training recurrent neural networks. We will apply this approach on modeling lake temperature and quality where we take into account the physical constraints along both the depth dimension and time dimension. By using scientific knowledge to guide the construction and learning the data-driven model, we demonstrate that this method can achieve better prediction accuracy as well as scientific consistency of results.",
"title": ""
},
{
"docid": "1feaf48291b7ea83d173b70c23a3b7c0",
"text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).",
"title": ""
},
{
"docid": "4f64b2b2b50de044c671e3d0d434f466",
"text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …",
"title": ""
},
{
"docid": "77d73cf3aa583e12cc102f48be184100",
"text": "The combinatorial cross-regulation of hundreds of sequence-specific transcription factors (TFs) defines a regulatory network that underlies cellular identity and function. Here we use genome-wide maps of in vivo DNaseI footprints to assemble an extensive core human regulatory network comprising connections among 475 sequence-specific TFs and to analyze the dynamics of these connections across 41 diverse cell and tissue types. We find that human TF networks are highly cell selective and are driven by cohorts of factors that include regulators with previously unrecognized roles in control of cellular identity. Moreover, we identify many widely expressed factors that impact transcriptional regulatory networks in a cell-selective manner. Strikingly, in spite of their inherent diversity, all cell-type regulatory networks independently converge on a common architecture that closely resembles the topology of living neuronal networks. Together, our results provide an extensive description of the circuitry, dynamics, and organizing principles of the human TF regulatory network.",
"title": ""
},
{
"docid": "427ebc0500e91e842873c4690cdacf79",
"text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;",
"title": ""
},
{
"docid": "eff7d3775d12687c81ae91b130c7c562",
"text": "We propose a novel approach for sparse probabilistic principal component analysis, that combines a low rank representation for the latent factors and loadings with a novel sparse variational inference approach for estimating distributions of latent variables subject to sparse support constraints. Inference and parameter estimation for the resulting model is achieved via expectation maximization with a novel variational inference method for the E-step that induces sparsity. We show that this inference problem can be reduced to discrete optimal support selection. The discrete optimization is submodular, hence, greedy selection is guaranteed to achieve 1-1/e fraction of the optimal. Empirical studies indicate effectiveness of the proposed approach for the recovery of a parsimonious decomposition as compared to established baseline methods. We also evaluate our method against state-of-the-art methods on high dimensional fMRI data, and show that the method performs as well as or better than other methods.",
"title": ""
},
{
"docid": "2855a1f420ed782317c1598c9d9c185e",
"text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.",
"title": ""
},
{
"docid": "091eedcd69373f99419a745f2215e345",
"text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.",
"title": ""
},
{
"docid": "8acfcaaa00cbfe275f6809fdaa3c6a78",
"text": "Internet usage has drastically shifted from host-centric end-to-end communication to receiver-driven content retrieval. In order to adapt to this change, a handful of innovative information/content centric networking (ICN) architectures have recently been proposed. One common and important feature of these architectures is to leverage built-in network caches to improve the transmission efficiency of content dissemination. Compared with traditional Web Caching and CDN Caching, ICN Cache takes on several new characteristics: cache is transparent to applications, cache is ubiquitous, and content to be cached is more ine-grained. These distinguished features pose new challenges to ICN caching technologies. This paper presents a comprehensive survey of state-of-art techniques aiming to address these issues, with particular focus on reducing cache redundancy and improving the availability of cached content. As a new research area, this paper also points out several interesting yet challenging research directions in this subject. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "33bb646417d0ebbe01747b97323df5d0",
"text": "Semantic search or text-to-video search in video is a novel and challenging problem in information and multimedia retrieval. Existing solutions are mainly limited to text-to-text matching, in which the query words are matched against the user-generated metadata. This kind of text-to-text search, though simple, is of limited functionality as it provides no understanding about the video content. This paper presents a state-of-the-art system for event search without any user-generated metadata or example videos, known as text-to-video search. The system relies on substantial video content understanding and allows for searching complex events over a large collection of videos. The proposed text-to-video search can be used to augment the existing text-to-text search for video. The novelty and practicality are demonstrated by the evaluation in NIST TRECVID 2014, where the proposed system achieves the best performance. We share our observations and lessons in building such a state-of-the-art system, which may be instrumental in guiding the design of the future system for video search and analysis.",
"title": ""
},
{
"docid": "ce3f09b04cc8a5445e009d65169f1ad1",
"text": "Current methods in treating chronic wounds have had limited success in large part due to the open loop nature of the treatment. We have created a localized 3D-printed smart wound dressing platform that will allow for real-time data acquisition of oxygen concentration, which is an important indicator of wound healing. This will serve as the first leg of a feedback loop for a fully optimized treatment mechanism tailored to the individual patient. A flexible oxygen sensor was designed and fabricated with high sensitivity and linear current output. With a series of off-the-shelf electronic components including a programmable-gain analog front-end, a microcontroller and wireless radio, an integrated electronic system with data readout and wireless transmission capabilities was assembled in a compact package. Using an elastomeric material, a bandage with exceptional flexibility and tensile strength was 3D-printed. The bandage contains cavities for both the oxygen sensor and the electronic systems, with contacts interfacing the two systems. Our integrated, flexible platform is the first step toward providing a self-operating, highly optimized remote therapy for chronic wounds.",
"title": ""
},
{
"docid": "135b9476e787624b899686664b03e6a1",
"text": "Amyotrophic lateral sclerosis (ALS) is the most common neurodegenerative disease of the motor system. Bulbar symptoms such as dysphagia and dysarthria are frequent features of ALS and can result in reductions in life expectancy and quality of life. These dysfunctions are assessed by clinical examination and by use of instrumented methods such as fiberendoscopic evaluation of swallowing and videofluoroscopy. Laryngospasm, another well-known complication of ALS, commonly comes to light during intubation and extubation procedures in patients undergoing surgery. Laryngeal and pharyngeal complications are treated by use of an array of measures, including body positioning, compensatory techniques, voice and breathing exercises, communication devices, dietary modifications, various safety strategies, and neuropsychological assistance. Meticulous monitoring of clinical symptoms and close cooperation within a multidisciplinary team (physicians, speech and language therapists, occupational therapists, dietitians, caregivers, the patients and their relatives) are vital.",
"title": ""
},
{
"docid": "cd56f2a6a5187476c8e63370a14c0dd0",
"text": "This complex infection has a number of objective manifestations, including a characteristic skin lesion called erythema migrans (the most common presentation of early Lyme disease), certain neurologic and cardiac manifestations, and pauciarticular arthritis (the most common presentation of late Lyme disease), all of which usually respond well to conventional antibiotic therapy. Despite resolution of the objective manifestations of infection after antibiotic treatment, a minority of patients have fatigue, musculoskeletal pain, difficulties with concentration or short-term memory, or all of these symptoms. In this article, we refer to these usually mild and self-limiting subjective symptoms as “post–Lyme disease symptoms,” and if they last longer than 6 months, we call them “post–Lyme disease syndrome.”",
"title": ""
},
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: [email protected] Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
},
{
"docid": "9ff912ad71c84cfba286f1be7bd8d4b3",
"text": "This article compares traditional industrial-organizational psychology (I-O) research published in Journal of Applied Psychology (JAP) with organizational behavior management (OBM) research published in Journal of Organizational Behavior Management (JOBM). The purpose of this comparison was to identify similarities and differences with respect to research topics and methodologies, and to offer suggestions for what OBM researchers and practitioners can learn from I-O. Articles published in JAP from 1987-1997 were reviewed and compared to articles published during the same decade in JOBM (Nolan, Jarema, & Austin, 1999). This comparison includes Barbara R. Bucklin, Alicia M. Alvero, Alyce M. Dickinson, John Austin, and Austin K. Jackson are affiliated with Western Michigan University. Address correspondence to Alyce M. Dickinson, Department of Psychology, Western Michigan University, Kalamazoo, MI 49008-5052 (E-mail: alyce.dickinson@ wmich.edu.) Journal of Organizational Behavior Management, Vol. 20(2) 2000 E 2000 by The Haworth Press, Inc. All rights reserved. 27 D ow nl oa de d by [ W es te rn M ic hi ga n U ni ve rs ity ] at 1 1: 14 0 3 Se pt em be r 20 12 JOURNAL OF ORGANIZATIONAL BEHAVIOR MANAGEMENT 28 (a) author characteristics, (b) authors published in both journals, (c) topics addressed, (d) type of article, and (e) research characteristics and methodologies. Among the conclusions are: (a) the primary relative strength of OBM is its practical significance, demonstrated by the proportion of research addressing applied issues; (b) the greatest strength of traditional I-O appears to be the variety and complexity of organizational research topics; and (c) each field could benefit from contact with research published in the other. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-342-9678. E-mail address: <[email protected]> Website: <http://www.HaworthPress.com>]",
"title": ""
},
{
"docid": "2648ec04733bbe56c1740e574c2a08e8",
"text": "Most work on tweet sentiment analysis is mono-lingual and the models that are generated by machine learning strategies do not generalize across multiple languages. Cross-language sentiment analysis is usually performed through machine translation approaches that translate a given source language into the target language of choice. Machine translation is expensive and the results that are provided by theses strategies are limited by the quality of the translation that is performed. In this paper, we propose a language-agnostic translation-free method for Twitter sentiment analysis, which makes use of deep convolutional neural networks with character-level embeddings for pointing to the proper polarity of tweets that may be written in distinct (or multiple) languages. The proposed method is more accurate than several other deep neural architectures while requiring substantially less learnable parameters. The resulting model is capable of learning latent features from all languages that are employed during the training process in a straightforward fashion and it does not require any translation process to be performed whatsoever. We empirically evaluate the efficiency and effectiveness of the proposed approach in tweet corpora based on tweets from four different languages, showing that our approach comfortably outperforms the baselines. Moreover, we visualize the knowledge that is learned by our method to qualitatively validate its effectiveness for tweet sentiment classification.",
"title": ""
},
{
"docid": "7ec93b17c88d09f8a442dd32127671d8",
"text": "Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.",
"title": ""
},
{
"docid": "7297a6317a3fc515d2d46943a2792c69",
"text": "The present work elaborates the process design methodology for the evaluation of the distillation systems based on the economic, exergetic and environmental point of view, the greenhouse gas (GHG) emissions. The methodology proposes the Heat Integrated Pressure Swing Distillation Sequence (HiPSDS) is economic and reduces the GHG emissions than the conventional Extractive Distillation Sequence (EDS) and the Pressure Swing Distillation Sequence (PSDS) for the case study of isobutyl alcohol and isobutyl acetate with the solvents for EDS and with low pressure variations for PSDS and HiPSDS. The study demonstrates that the exergy analysis can predict the results of the economic and environmental evaluation associated with the process design.",
"title": ""
}
] |
scidocsrr
|
021a235d989467e03d929077557323b1
|
An Efficient Data Fingerprint Query Algorithm Based on Two-Leveled Bloom Filter
|
[
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
},
{
"docid": "26b415f796b85dea5e63db9c58b6c790",
"text": "A predominant portion of Internet services, like content delivery networks, news broadcasting, blogs sharing and social networks, etc., is data centric. A significant amount of new data is generated by these services each day. To efficiently store and maintain backups for such data is a challenging task for current data storage systems. Chunking based deduplication (dedup) methods are widely used to eliminate redundant data and hence reduce the required total storage space. In this paper, we propose a novel Frequency Based Chunking (FBC) algorithm. Unlike the most popular Content-Defined Chunking (CDC) algorithm which divides the data stream randomly according to the content, FBC explicitly utilizes the chunk frequency information in the data stream to enhance the data deduplication gain especially when the metadata overhead is taken into consideration. The FBC algorithm consists of two components, a statistical chunk frequency estimation algorithm for identifying the globally appeared frequent chunks, and a two-stage chunking algorithm which uses these chunk frequencies to obtain a better chunking result. To evaluate the effectiveness of the proposed FBC algorithm, we conducted extensive experiments on heterogeneous datasets. In all experiments, the FBC algorithm persistently outperforms the CDC algorithm in terms of achieving a better dedup gain or producing much less number of chunks. Particularly, our experiments show that FBC produces 2.5 ~ 4 times less number of chunks than that of a baseline CDC which achieving the same Duplicate Elimination Ratio (DER). Another benefit of FBC over CDC is that the FBC with average chunk size greater than or equal to that of CDC achieves up to 50% higher DER than that of a CDC algorithm.",
"title": ""
},
{
"docid": "4bf6c59cdd91d60cf6802ae99d84c700",
"text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.",
"title": ""
}
] |
[
{
"docid": "b69a39dd203eb6d2a27dae650ef7e6cb",
"text": "In this paper, a high-power high-efficiency wireless-power-transfer system using the class-E operation for transmitter via inductive coupling has been designed and fabricated using the proposed design approach. The system requires no complex external control system but relies on its natural impedance response to achieve the desired power-delivery profile across a wide range of load resistances while maintaining high efficiency to prevent any heating issues. The proposed system consists of multichannels with independent gate drive to control power delivery. The fabricated system is compact and capable of 295 W of power delivery at 75.7% efficiency with forced air cooling and of 69 W of power delivery at 74.2% efficiency with convection cooling. This is the highest power and efficiency of a loosely coupled planar wireless-power-transfer system reported to date.",
"title": ""
},
{
"docid": "b0840d44b7ec95922eeed4ef71b338f9",
"text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.",
"title": ""
},
{
"docid": "d2a89459ca4a0e003956d6fe4871bb34",
"text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
{
"docid": "2e73406dd4ebd7ba90c9c20a142a9684",
"text": "Blocking characteristics of diamond junction field-effect transistors were evaluated at room temperature (RT) and 200 <sup>°</sup>C. A high source-drain bias (breakdown voltage) of 566 V was recorded at RT, whereas it increased to 608 V at 200 <sup>°</sup>C. The positive temperature coefficient of the breakdown voltage indicates the avalanche breakdown of the device. We found that the breakdown occurred at the drain edge of the p-n junction between p-channel and the n<sup>+</sup>-gates. All four devices measured in this letter showed a maximum gate-drain bias over 500 V at RT and 600 V at 200 <sup>°</sup>C.",
"title": ""
},
{
"docid": "8ec6132195c10eedfa3e2ffa70d271b5",
"text": "Devise metrics. Scientists, social scientists and economists need to design a set of practical indices for tracking progress on each SDG. Ensuring access to sustainable and modern energy for all (goal 7), for example, will require indicators of improvements in energy efficiency and carbon savings from renewable-energy technologies (see go.nature.com/pkij7y). Parameters other than just economic growth must be included, such as income inequality, carbon emissions, population and lifespans 1 .",
"title": ""
},
{
"docid": "7280754ec81098fe38023efcb25871ba",
"text": "In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database.",
"title": ""
},
{
"docid": "463543546eeca427eb348df6c019c986",
"text": "Blockchains have recently generated explosive interest from both academia and industry, with many proposed applications. But descriptions of many these proposals are more visionary projections than realizable proposals, and even basic definitions are often missing. We define “blockchain” and “blockchain network”, and then discuss two very different, well known classes of blockchain networks: cryptocurrencies and Git repositories. We identify common primitive elements of both and use them to construct a framework for explicitly articulating what characterizes blockchain networks. The framework consists of a set of questions that every blockchain initiative should address at the very outset. It is intended to help one decide whether or not blockchain is an appropriate approach to a particular application, and if it is, to assist in its initial design stage.",
"title": ""
},
{
"docid": "299e7f7d1c48d4a6a22c88dcf422f7a1",
"text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.",
"title": ""
},
{
"docid": "753b5f366412b0d6b088baf6cbf154a2",
"text": "We introduce a highly structured family of hard satisfiable 3-SAT formulas corresponding to an ordered spin-glass model from statistical physics. This model has provably “glassy” behavior; that is, it has many local optima with large energy barriers between them, so that local search algorithms get stuck and have difficulty finding the true ground state, i.e., the unique satisfying assignment. We test the hardness of our formulas with two complete Davis-Putnam solvers, Satz and zChaff, and two incomplete solvers, WalkSAT and the recently introduced Survey Propagation algorithm SP. We compare our formulas to random XOR-SAT formulas and to two other generators of hard satisfiable instances, the minimum disagreement parity formulas of Crawford et al., and Hirsch’s hgen2. For the complete solvers the running time of our formulas grows exponentially in √ n, and exceeds that of random XOR-SAT formulas for small problem sizes. More interestingly, our formulas appear to be harder for WalkSAT than any other known generator of satisfiable instances.",
"title": ""
},
{
"docid": "cac3a510f876ed255ff87f2c0db2ed8e",
"text": "The resurgence of cancer immunotherapy stems from an improved understanding of the tumor microenvironment. The PD-1/PD-L1 axis is of particular interest, in light of promising data demonstrating a restoration of host immunity against tumors, with the prospect of durable remissions. Indeed, remarkable clinical responses have been seen in several different malignancies including, but not limited to, melanoma, lung, kidney, and bladder cancers. Even so, determining which patients derive benefit from PD-1/PD-L1-directed immunotherapy remains an important clinical question, particularly in light of the autoimmune toxicity of these agents. The use of PD-L1 (B7-H1) immunohistochemistry (IHC) as a predictive biomarker is confounded by multiple unresolved issues: variable detection antibodies, differing IHC cutoffs, tissue preparation, processing variability, primary versus metastatic biopsies, oncogenic versus induced PD-L1 expression, and staining of tumor versus immune cells. Emerging data suggest that patients whose tumors overexpress PD-L1 by IHC have improved clinical outcomes with anti-PD-1-directed therapy, but the presence of robust responses in some patients with low levels of expression of these markers complicates the issue of PD-L1 as an exclusionary predictive biomarker. An improved understanding of the host immune system and tumor microenvironment will better elucidate which patients derive benefit from these promising agents.",
"title": ""
},
{
"docid": "06a1d19d18e1f23cd252c34b8b9aa0ec",
"text": "To solve crimes, investigators often rely on interviews with witnesses, victims, or criminals themselves. The interviews are transcribed and the pertinent data is contained in narrative form. To solve one crime, investigators may need to interview multiple people and then analyze the narrative reports. There are several difficulties with this process: interviewing people is time consuming, the interviews - sometimes conducted by multiple officers - need to be combined, and the resulting information may still be incomplete. For example, victims or witnesses are often too scared or embarrassed to report or prefer to remain anonymous. We are developing an online reporting system that combines natural language processing with insights from the cognitive interview approach to obtain more information from witnesses and victims. We report here on information extraction from police and witness narratives. We achieved high precision, 94% and 96% and recall, 85% and 90%, for both narrative types.",
"title": ""
},
{
"docid": "898b5800e6ff8a599f6a4ec27310f89a",
"text": "Jenni Anttonen: Using the EMFi chair to measure the user's emotion-related heart rate responses Master's thesis, 55 pages, 2 appendix pages May 2005 The research reported here is part of a multidisciplinary collaborative project that aimed at developing embedded measurement devices using electromechanical film (EMFi) as a basic measurement technology. The present aim was to test if an unobtrusive heart rate measurement device, the EMFi chair, had the potential to detect heart rate changes associated with emotional stimulation. Six-second long visual, auditory, and audiovisual stimuli with negative, neutral, and positive emotional content were presented to 24 participants. Heart rate responses were measured with the EMFi chair and with earlobe photoplethysmography (PPG). Also, subjective ratings of the stimuli were collected. Firstly, the high correlation between the measurement results of the EMFi chair and PPG, r = 0.99, p < 0.001, indicated that the EMFi chair measured heart rate reliably. Secondly, heart rate showed a decelerating response to visual, auditory, and audiovisual emotional stimulation. The emotional stimulation caused statistically significant changes in heart rate at the 6 th second from stimulus onset so that the responses to negative stimulation were significantly lower than the responses to positive stimulation. The results were in line with previous research. The results show that heart rate responses measured with the EMFi chair differed significantly for positive and negative emotional stimulation. These results suggest that the EMFi chair could be used in HCI to measure the user's emotional responses unobtrusively.",
"title": ""
},
{
"docid": "8750fc51d19bbf0cbae2830638f492fd",
"text": "Smartphones are increasingly becoming an ordinary part of our daily lives. With their remarkable capacity, applications used in these devices are extremely varied. In terms of language teaching, the use of these applications has opened new windows of opportunity, innovatively shaping the way instructors teach and students learn. This 4 week-long study aimed to investigate the effectiveness of a mobile application on teaching 40 figurative idioms from the Michigan Corpus of Academic Spoken English (MICASE) corpus compared to traditional activities. Quasi-experimental research design with pretest and posttest was employed to determine the differences between the scores of the control (n=25) and the experimental group (n=25) formed with convenience sampling. Results indicate that participants in the experimental group performed significantly better in the posttest, demonstrating the effectiveness of the mobile application used in this study on learning idioms. The study also provides recommendations towards the use of mobile applications in teaching vocabulary.",
"title": ""
},
{
"docid": "da48aae7960f0871c91d4c6c9f5f44bf",
"text": "It is often difficult to ground text to precise time intervals due to the inherent uncertainty arising from either missing or multiple expressions at year, month, and day time granularities. We address the problem of estimating an excerpt-time model capturing the temporal scope of a given news article excerpt as a probability distribution over chronons. For this, we propose a semi-supervised distribution propagation framework that leverages redundancy in the data to improve the quality of estimated time models. Our method generates an event graph with excerpts as nodes and models various inter-excerpt relations as edges. It then propagates empirical excerpt-time models estimated for temporally annotated excerpts, to those that are strongly related but miss annotations. In our experiments, we first generate a test query set by randomly sampling 100 Wikipedia events as queries. For each query, making use of a standard text retrieval model, we then obtain top-10 documents with an average of 150 excerpts. From these, each temporally annotated excerpt is considered as gold standard. The evaluation measures are first computed for each gold standard excerpt for a single query, by comparing the estimated model with our method to the empirical model from the original expressions. Final scores are reported by averaging over all the test queries. Experiments on the English Gigaword corpus show that our method estimates significantly better time models than several baselines taken from the literature.",
"title": ""
},
{
"docid": "1062f37de56db35202f8979a7ea88efd",
"text": "This paper attempts to evaluate the anti-inflammatory potential and the possible mechanism of action of the leaf extracts and isolated compound(s) of Aerva sanguinolenta (Amaranthaceae), traditionally used in ailments related to inflammation. The anti-inflammatory activity of ethanol extract (ASE) was evaluated by acute, subacute and chronic models of inflammation, while a new cerebroside (‘trans’, ASE-1), isolated from the bioactive ASE and characterized spectroscopically, was tested by carrageenan-induced mouse paw oedema and protein exudation model. To understand the underlying mechanism, we measured the release of pro-inflammatory mediators such as nitric oxide (NO) and prostaglandin (PG)E2, along with the cytokines like tumour necrosis factor (TNF)-α, and interleukins(IL)-1β, IL-6 and IL-12 from lipopolysaccharide (LPS)-stimulated peritoneal macrophages. The results revealed that ASE at 400 mg/kg caused significant reduction of rat paw oedema, granuloma and exudative inflammation, while the inhibition of mouse paw oedema and exudative inflammation by ASE-1 (20 mg/kg) was comparable to that of the standard drug indomethacin (10 mg/kg). Interestingly, both ASE and ASE-1 showed significant inhibition of the expressions of iNOS2 and COX-2, and the down-regulation of the expressions of IL-1β, IL-6, IL-12 and TNF-α, in LPS-stimulated macrophages, via the inhibition of COX-2-mediated PGE2 release. Thus, our results validated the traditional use of A. sanguinolenta leaves in inflammation management.",
"title": ""
},
{
"docid": "4add7de7ed94bc100de8119ebd74967e",
"text": "Wireless signal strength is susceptible to the phenomena of interference, jumping, and instability, which often appear in the positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform fusing calculation by adaptively determining the dynamic noise of a filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically requires a quite high computational burden: To reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m through the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).",
"title": ""
},
{
"docid": "df29784edea11d395547ca23830f2f62",
"text": "The clinical efficacy of current antidepressant therapies is unsatisfactory; antidepressants induce a variety of unwanted effects, and, moreover, their therapeutic mechanism is not clearly understood. Thus, a search for better and safer agents is continuously in progress. Recently, studies have demonstrated that zinc and magnesium possess antidepressant properties. Zinc and magnesium exhibit antidepressant-like activity in a variety of tests and models in laboratory animals. They are active in forced swim and tail suspension tests in mice and rats, and, furthermore, they enhance the activity of conventional antidepressants (e.g., imipramine and citalopram). Zinc demonstrates activity in the olfactory bulbectomy, chronic mild and chronic unpredictable stress models in rats, while magnesium is active in stress-induced depression-like behavior in mice. Clinical studies demonstrate that the efficacy of pharmacotherapy is enhanced by supplementation with zinc and magnesium. The antidepressant mechanisms of zinc and magnesium are discussed in the context of glutamate, brain-derived neurotrophic factor (BDNF) and glycogen synthase kinase-3 (GSK-3) hypotheses. All the available data indicate the importance of zinc and magnesium homeostasis in the psychopathology and therapy of affective disorders.",
"title": ""
},
{
"docid": "feef714b024ad00086a5303a8b74b0a4",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.",
"title": ""
},
{
"docid": "332d517d07187d2403a672b08365e5ef",
"text": "Please cite this article in press as: C. Galleguillos doi:10.1016/j.cviu.2010.02.004 The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images include occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can successfully identify object classes up to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
62c2666f78c3d16db14c058e12d2651c
|
Behavioral Assessment of Emotion Discrimination, Emotion Regulation, and Cognitive Control in Childhood, Adolescence, and Adulthood
|
[
{
"docid": "9beaf6c7793633dceca0c8df775e8959",
"text": "The course, antecedents, and implications for social development of effortful control were examined in this comprehensive longitudinal study. Behavioral multitask batteries and parental ratings assessed effortful control at 22 and 33 months (N = 106). Effortful control functions encompassed delaying, slowing down motor activity, suppressing/initiating activity to signal, effortful attention, and lowering voice. Between 22 and 33 months, effortful control improved considerably, its coherence increased, it was stable, and it was higher for girls. Behavioral and parent-rated measures converged. Children's focused attention at 9 months, mothers' responsiveness at 22 months, and mothers' self-reported socialization level all predicted children's greater effortful control. Effortful control had implications for concurrent social development. Greater effortful control at 22 months was linked to more regulated anger, and at 33 months, to more regulated anger and joy and to stronger restraint.",
"title": ""
}
] |
[
{
"docid": "f513165fd055b04544dff6eb5b7ec771",
"text": "Low power wide area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things, LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine applications. This review paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, Weightless-SIG, and Dash7 alliance). We further note that LPWA technologies adopt similar approaches, thus sharing similar limitations and challenges. This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade.",
"title": ""
},
{
"docid": "4ff5953f4c81a6c77f46c66763d791dc",
"text": "We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character/word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77% respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.",
"title": ""
},
{
"docid": "b163fb3faa31f6db35599d32d7946523",
"text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.",
"title": ""
},
{
"docid": "4fea04d8f04012b0dbbf45a6ab3a5951",
"text": "Nowadays large-scale distributed machine learning systems have been deployed to support various analytics and intelligence services in IT firms. To train a large dataset and derive the prediction/inference model, e.g., a deep neural network, multiple workers are run in parallel to train partitions of the input dataset, and update shared model parameters. In a shared cluster handling multiple training jobs, a fundamental issue is how to efficiently schedule jobs and set the number of concurrent workers to run for each job, such that server resources are maximally utilized and model training can be completed in time. Targeting a distributed machine learning system using the parameter server framework, $w$ e design an online algorithm for scheduling the arriving jobs and deciding the adjusted numbers of concurrent workers and parameter servers for each job over its course, to maximize overall utility of all jobs, contingent on their completion times. Our online algorithm design utilizes a primal-dual framework coupled with efficient dual subroutines, achieving good long-term performance guarantees with polynomial time complexity. Practical effectiveness of the online algorithm is evaluated using trace-driven simulation and testbed experiments, which demonstrate its outperformance as compared to commonly adopted scheduling algorithms in today's cloud systems.",
"title": ""
},
{
"docid": "27775805c45a82cbd31fd9a5e93f3df1",
"text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage. We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.",
"title": ""
},
{
"docid": "2efe399d3896f78c6f152d98aa6d33a0",
"text": "We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support p = (p<sub>1</sub>, p<sub>2</sub>, ... , p<sub>n</sub>), how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L<sub>1</sub> distance) ||p - q||1≥ ϵ? We resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p, ϵ) on distributions and error parameters, such that our tester distinguishes p = q from ||p-q||1≥ ϵ using f(p, ϵ) samples with success probability > 2/3, but no tester can distinguish p = q from ||p - q||1≥ c · ϵ when given c' · f(p, ϵ) samples. The function f(p, ϵ) is upperbounded by a multiple of ||p||2/3/ϵ<sup>2</sup>, but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most n have L<sub>2/3</sub> norm bounded by √n, this result immediately shows that for such distributions, O(√n/ϵ<sup>2</sup>) samples suffice, tightening the previous bound of O(√npolylog/n<sup>4</sup>) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities- generalizing Cauchy-Schwarz, Holder's inequality, and the monotonicity of L<sub>p</sub> norms. Specifically, we characterize the set of sequences (a)<sub>i</sub> = a<sub>1</sub>, . . . , ar, (b)i = b<sub>1</sub>, . . . , br, (c)i = c<sub>1</sub>, ... , cr, for which it holds that for all finite sequences of positive numbers (x)<sub>j</sub> = x<sub>1</sub>,... and (y)<sub>j</sub> = y<sub>1</sub>,...,Π<sub>i=1</sub><sup>r</sup> (Σ<sub>j</sub>x<sup>a</sup><sub>j</sub><sup>i</sup><sub>y</sub><sub>i</sub><sup>b</sup><sup>i</sup>)<sup>ci</sup>≥1. For example, the standard Cauchy-Schwarz inequality corresponds to the sequences a = (1, 0, 1/2), b = (0,1, 1/2), c = (1/2 , 1/2 , -1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought throu.gh trial and error, by hand. We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.",
"title": ""
},
{
"docid": "6f0283efa932663c83cc2c63d19fd6cf",
"text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.",
"title": ""
},
{
"docid": "c94460bfeeec437b751e987f399778c0",
"text": "The Steiner packing problem is to find the maximum number of edge-disjoint subgraphs of a given graph G that connect a given set of required points S. This problem is motivated by practical applications in VLSI- layout and broadcasting, as well as theoretical reasons. In this paper, we study this problem and present an algorithm with an asymptotic approximation factor of |S|/4. This gives a sufficient condition for the existence of k edge-disjoint Steiner trees in a graph in terms of the edge-connectivity of the graph. We will show that this condition is the best possible if the number of terminals is 3. At the end, we consider the fractional version of this problem, and observe that it can be reduced to the minimum Steiner tree problem via the ellipsoid algorithm.",
"title": ""
},
{
"docid": "1a91e143f4430b11f3af242d6e07cbba",
"text": "Random graph matching refers to recovering the underlying vertex correspondence between two random graphs with correlated edges; a prominent example is when the two random graphs are given by Erdős-Rényi graphs G(n, d n ). This can be viewed as an average-case and noisy version of the graph isomorphism problem. Under this model, the maximum likelihood estimator is equivalent to solving the intractable quadratic assignment problem. This work develops an Õ(nd + n)-time algorithm which perfectly recovers the true vertex correspondence with high probability, provided that the average degree is at least d = Ω(log n) and the two graphs differ by at most δ = O(log−2(n)) fraction of edges. For dense graphs and sparse graphs, this can be improved to δ = O(log−2/3(n)) and δ = O(log−2(d)) respectively, both in polynomial time. The methodology is based on appropriately chosen distance statistics of the degree profiles (empirical distribution of the degrees of neighbors). Before this work, the best known result achieves δ = O(1) and n ≤ d ≤ n for some constant c with an n-time algorithm [BCL18] and δ = Õ((d/n)) and d = Ω̃(n) with a polynomial-time algorithm [DCKG18].",
"title": ""
},
{
"docid": "08731e24a7ea5e8829b03d79ef801384",
"text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.",
"title": ""
},
{
"docid": "29025f061a22aed656e8d24416c52002",
"text": "This contribution deals with the Heeger-Bergen pyramid-based texture analysis/synthesis algorithm. It brings a detailed explanation of the original algorithm tested on many characteristic examples. Our analysis reproduces the original results, but also brings a minor improvement concerning non-periodic textures. Inspired by visual perception theories, Heeger and Bergen proposed to characterize a texture by its first-order statistics of both its color and its responses to multiscale and multi-orientation filters, namely the steerable pyramid. The Heeger-Bergen algorithm consists in the following procedure: starting from a white noise image, histogram matchings are performed to the image alternately in the image domain and the steerable pyramid domain, so that the corresponding output histograms match the ones of the input texture. Source Code An on-line demo1 of the Heeger-Bergen pyramid-based texture synthesis algorithm is available. The demo permits to upload a color image to extract a subimage and to run the texture synthesis algorithm on this subimage. The algorithm available in the demo is a slightly improved version treating non-periodic textures by a “periodic+smooth” decomposition [13]. The algorithm works with color textures and is able to synthesize textures with larger size than the input image. The original version of the Heeger-Bergen algorithm (where the boundaries are handled by mirror symmetrization) is optional in the source code. An ANSI C implementation is available for download here2. It is provided with: • An illustrated html documentation; • Source code; This code requires libpng, libfftw3, openmp, and getopt. Compilation and usage instructions are included in the README.txt file of the zip archive. The illustrated HTML documentation can be reproduced from the source code by using doxygen (see the README.txt file of the zip archive for details).",
"title": ""
},
{
"docid": "abc48ae19e2ea1e1bb296ff0ccd492a2",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "4a6e382b9db87bf5915fec8de4a67b55",
"text": "BACKGROUND\nThe aim of the study is to analyze the nature, extensions, and dural relationships of hormonally inactive giant pituitary tumors. The relevance of the anatomic relationships to surgery is analyzed.\n\n\nMETHODS\nThere were 118 cases of hormonally inactive pituitary tumors analyzed with the maximum dimension of more than 4 cm. These cases were surgically treated in our neurosurgical department from 1995 to 2002. Depending on the anatomic extensions and the nature of their meningeal coverings, these tumors were divided into 4 grades. The grades reflected an increasing order of invasiveness of adjacent dural and arachnoidal compartments. The strategy and outcome of surgery and radiotherapy was analyzed for these 4 groups. Average duration of follow-up was 31 months.\n\n\nRESULTS\nThere were 54 giant pituitary tumors, which remained within the confines of sellar dura and under the diaphragma sellae and did not enter into the compartment of cavernous sinus (Grade I). Transgression of the medial wall and invasion into the compartment of the cavernous sinus (Grade II) was seen in 38 cases. Elevation of the dura of the superior wall of the cavernous sinus and extension of this elevation into various compartments of brain (Grade III) was observed in 24 cases. Supradiaphragmatic-subarachnoid extension (Grade IV) was seen in 2 patients. The majority of patients were treated by transsphenoidal route.\n\n\nCONCLUSIONS\nGiant pituitary tumors usually have a meningeal cover and extend into well-defined anatomic pathways. Radical surgery by a transsphenoidal route is indicated and possible in Grade I-III pituitary tumors. Such a strategy offers a reasonable opportunity for recovery in vision and a satisfactory postoperative and long-term outcome. Biopsy of the tumor followed by radiotherapy could be suitable for Grade IV pituitary tumors.",
"title": ""
},
{
"docid": "a75e29521b04d5e09228918e4ed560a6",
"text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.",
"title": ""
},
{
"docid": "dd32079de1ca0b5cac5b2dc5fc146d17",
"text": "In this paper, we propose a new authentication method to prevent authentication vulnerability of Claim Token method of Membership Service provide in Private BlockChain. We chose Hyperledger Fabric v1.0 using JWT authentication method of membership service. TOTP, which generate OTP tokens and user authentication codes that generate additional time-based password on existing authentication servers, has been applied to enforce security and two-factor authentication method to provide more secure services.",
"title": ""
},
{
"docid": "cb1a99cc1bb705d8ad5f26cc9a61e695",
"text": "In the smart grid system, dynamic pricing can be an efficient tool for the service provider which enables efficient and automated management of the grid. However, in practice, the lack of information about the customers' time-varying load demand and energy consumption patterns and the volatility of electricity price in the wholesale market make the implementation of dynamic pricing highly challenging. In this paper, we study a dynamic pricing problem in the smart grid system where the service provider decides the electricity price in the retail market. In order to overcome the challenges in implementing dynamic pricing, we develop a reinforcement learning algorithm. To resolve the drawbacks of the conventional reinforcement learning algorithm such as high computational complexity and low convergence speed, we propose an approximate state definition and adopt virtual experience. Numerical results show that the proposed reinforcement learning algorithm can effectively work without a priori information of the system dynamics.",
"title": ""
},
{
"docid": "0b67a35902f4a027032e5b9034997342",
"text": "In order to make software applications simpler to write and easier to maintain, a software digital signal processing library that performs essential signal and image processing functions is an important part of every DSP developer’s toolset. In general, such a library provides high-level interface and mechanisms, therefore developers only need to known how to use algorithm, not the details of how they work. Then, complex signal transformations become function calls, e.g. C-callable functions. Considering the 2-D convolver function as an example of great significance for DSPs, this work proposes to replace this software function by an emulation on a FPGA initially configured by software programming. Therefore, the exploration of the 2-D convolver’s design space will provide guidelines for the development of a library of DSP-oriented hardware configurations intended to significantly speed-up the performance of general DSP processors. Based on the specific convolver, and considering operators supported in the library as hardware accelerators, a series of trade-offs for efficiently exploiting the bandwidth between the general purpose DSP and the accelerators are proposed. In terms of implementation, this work explores the performance and architectural tradeoffs involved in the design of an FPGA-based 2D convolution coprocessor for the TMS320C40 DSP microprocessor from Texas Instruments. However, the proposed concept is not limited to a particular processor. Copyright 1999 IEEE . This paper is an extended version of a paper accepted in the IEEE VLSI Systems Transaction. The paper will be published in 1999. Personnel use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collectives works for resale or redistribution to servers or lists, or to reuse any copyrithed component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE service Center / 445 Hoes Lane / P.O. Box 1331 / Pistacataway, NJ 08855-1331, USA. Telephone: + Intl. 732562-3966.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "0fd37a459c95b20e3d80021da1bb281d",
"text": "Social media data are increasingly used as the source of research in a variety of domains. A typical example is urban analytics, which aims at solving urban problems by analyzing data from different sources including social media. The potential value of social media data in tourism studies, which is one of the key topics in urban research, however has been much less investigated. This paper seeks to understand the relationship between social media dynamics and the visiting patterns of visitors to touristic locations in real-world cases. By conducting a comparative study, we demonstrate how social media characterizes touristic locations differently from other data sources. Our study further shows that social media data can provide real-time insights of tourists’ visiting patterns in big events, thus contributing to the understanding of social media data utility in tourism studies.",
"title": ""
},
{
"docid": "3df9e73ce61d6168dba668dc9f02078a",
"text": "Web mail search is an emerging topic, which has not been the object of as many studies as traditional Web search. In particular, little is known about the characteristics of mail searchers and of the queries they issue. We study here the characteristics of Web mail searchers, and explore how demographic signals such as location, age, gender, and inferred income, influence their search behavior. We try to understand for instance, whether women exhibit different mail search patterns than men, or whether senior people formulate more precise queries than younger people. We compare our results, obtained from the analysis of a Yahoo Web mail search query log, to similar work conducted in Web and Twitter search. In addition, we demonstrate the value of the user’s personal query log, as well as of the global query log and of the demographic signals, in a key search task: dynamic query auto-completion. We discuss how going beyond users’ personal query logs (their search history) significantly improves the quality of suggestions, in spite of the fact that a user’s mailbox is perceived as being highly personal. In particular, we note the striking value of demographic features for queries relating to companies/organizations, thus verifying our assumption that query completion benefits from leveraging queries issued by “people like me\". We believe that demographics and other such global features can be leveraged in other mail applications, and hope that this work is a first step in this direction.",
"title": ""
}
] |
scidocsrr
|