New Way For iOS App Developers to Respond to Their Users

When iOS 10.3 is finally released, it will ship with the ability for developers to respond to customer feedback. The change was announced in the release notes that accompanied the firmware’s beta build. Apple anticipates that the improved feedback infrastructure is in high demand and is exactly what the developer community has been asking for. The Android platform has had this feature for quite some time.

The new feature gives the platform another way to handle App Store reviews and ratings. It can also be viewed as a concession to the part of the iOS developer community that has been dissatisfied with how the store handles cases such as the Dev Dash app, whose listing became disorganized as a result of discrepancies in its customer ratings.

The ability for developers to respond to customer ratings and reviews matters because they can explain issues or clarify what led to a customer’s dissatisfaction, not only for the customer being answered but also for other customers who may have run into the same problem. This is particularly useful when a customer has misunderstood one of the app’s features or is reporting a bug that has already been fixed in the current version.

The update benefits users as well: they will no longer need to email developers about issues that call for a quick reply, such as paid items not showing up in the app. Apple also revealed that the new feedback system will come to the Mac App Store, probably with the anticipated 10.13 upgrade expected in June. The current macOS release is 10.12.3, which shipped this month.

In addition, the iOS beta ships with a related feature that goes a long way toward improving the user experience. Under the current system a developer can ask the user to rate the app, but once the user agrees they are redirected to the App Store to leave the rating. iOS 10.3 does away with that detour, letting users rate the app without ever leaving it.
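
Under the hood, this in-app prompt comes from StoreKit’s SKStoreReviewController, introduced alongside iOS 10.3. Below is a minimal sketch of how a developer might call it; the wrapper function name is ours, and the system still decides whether the dialog actually appears:

```swift
import StoreKit

// Minimal sketch: request an in-app rating on iOS 10.3 or later.
// requestReview() is only a request -- the system rate-limits how often
// the dialog is actually shown, so call it at a natural, low-friction moment.
func askForReviewIfAppropriate() {
    if #available(iOS 10.3, *) {
        SKStoreReviewController.requestReview()
    }
    // On earlier iOS versions, the old flow -- sending the user out to the
    // App Store review page -- remains the only option.
}
```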

 

 

 

Here Come Android Instant Apps

Tech giant Google has started rolling out Instant Apps. At the Google I/O developer conference held last year, Instant Apps for Android was billed as a feature that would forever change the way users interact with apps on their mobile devices.

According to the Android Developers blog, the feature is initially accessible through a handful of apps as part of a beta test that users can participate in.

What does the phrase Android Instant Apps mean? With the Instant Apps feature, users can load and run apps on their Android devices, such as tablets and smartphones, without having to install them first.

While visiting certain websites, users may come across an invitation to download a mobile app instead of using the site. The app offers a more streamlined mobile experience for reaching the same content, but users can forgo the app’s installation process and simply use the browser version of the website.

With Instant Apps, Android apps load almost immediately on the device. While that may not be quite as fast as opening the website, the whole process takes just a few seconds, which is much faster than downloading and installing the app first. To take advantage of the technology, developers will need to update their apps to include this functionality, because Instant Apps is built on the same technology as regular Android apps.

The feature has opened for limited testing roughly eight months after it was announced. The initial apps to use it are Periscope, BuzzFeed, Viki and Wish. During this testing period, the Android team will collect feedback from participating users with the aim of expanding to more apps in the near future. Users interested in trying Instant Apps can simply visit the websites of the apps that already offer the feature.

Tesla Committed to Bringing Full Self-Driving Capability to Its Vehicles in the Near Future

Tesla, the world-leading manufacturer of electric cars, has a history of releasing groundbreaking technologies. One of the most prominent items on Tesla’s software timeline is autonomous driving. The technology comes in two tiers: Enhanced Autopilot, which adds unique driver-assistance features, and Full Self-Driving capability, which, despite its name, will not actually drive the car by itself for a while yet, but could still benefit loyal Tesla drivers very soon.

Tesla had previously stated that it would introduce Enhanced Autopilot in December 2016. That rollout ran late, and the company only began introducing it recently, with updates to follow every few weeks or months in anticipation of full self-driving capability by the end of 2017.

The feature will become accessible to individual Tesla owners depending on validation and on regulatory permission in their jurisdictions. Michigan, for instance, passed a law that allows self-driving vehicles on public roads once they pass testing.

Currently, Tesla sells both Enhanced Autopilot and Full Self-Driving capability. Although Full Self-Driving will not be useful until at least the end of 2017, a prospective Tesla buyer can save a few dollars by ordering it with the car, because Tesla charges a premium to activate the feature after delivery. There are other advantages to buying the feature now, before Tesla is fully capable of delivering a self-driving system.

Tesla vehicles carry a hardware suite of eight cameras, one radar and 360-degree ultrasonic sensors. With Enhanced Autopilot only half of the cameras are enabled; customers get the rest once they buy the Full Self-Driving functionality. Tesla believes that Full Self-Driving, which uses all eight of the car’s cameras, is twice as good as the average everyday driver, so drivers should be in a position to experience the difference between the two technologies before Tesla introduces Level 5 autonomy in a vehicle.

Though there is no fully autonomous vehicle in the Tesla fleet yet, Tesla has begun rolling out the initial stage of Enhanced Autopilot, which is not yet on par with the first-generation Autopilot system. The company is making continuous improvements with the help of data sourced from owners, and those who have chosen Full Self-Driving over Enhanced Autopilot will begin to see improvements that make fuller use of the hardware.

Uber’s Self-Driving Cars Tested by Uber ATC Director Raffi Krikorian

Uber ATC Director Raffi Krikorian tested Uber’s self-driving cars on September 13th, 2016, to collect data on their advanced technology and riding experience. It took the robotics center at Carnegie Mellon University 18 months to research and develop Uber’s latest autonomous technology. A small group of spectators was invited to observe the test drive of 14 Ford Fusion cars fitted with cameras and radar equipment. The test started at Uber’s Advanced Technologies Campus near downtown Pittsburgh and extended throughout the city and the Strip District. Krikorian shared his experience as a passenger and driver with TechCrunch.

The demonstration began with an employee handing Krikorian a phone to request a ride through the Uber app. Once Krikorian was seated in the back of the Ford Fusion, he was instructed to tap the ready-to-go button on a tablet that displayed a live view of the car’s surroundings. He rode with an engineer behind the wheel and another engineer in the front passenger seat through the Lawrenceville neighborhood, downtown, the Strip District and over the 9th Street Bridge. The car handled most of the drive well but struggled a few times when other automobiles were backing up or parked in a lane. The self-driving Ford Fusion also had trouble positioning itself on the bridge and while approaching a truck.

The engineer in the driver’s seat had to change lanes manually when a big truck was parked in the self-driving car’s lane and a city worker unexpectedly darted out from in front of the truck. The self-driving cars earned positive reviews for how they handled stopped vehicles, traffic lights and traffic laws. If a bus was turning or picking up passengers, the Ford Fusion would automatically stop. Uber’s advanced technology and intelligent software reads red, yellow and green traffic lights.

Krikorian also shared his experience taking the driver’s seat for the autonomous ride back to the Uber Advanced Technologies Campus. He engaged self-driving mode by pressing a silver button on the console once a blue light came on on the dashboard. The person in the driver’s seat can return the car to manual control by pressing a red button, the brake pedal or the accelerator; Krikorian had to use that function to avoid a van stopped in a lane. After the test drive was over, he described the ride as gentle, with occasional stops.

Robots Get Manipulative

Robots powered by artificial intelligence seem to intimidate all kinds of intelligent people, from Stephen Hawking to Elon Musk. But how scary are pieces of machinery? They can be powered down. They are programmed by humans. As far as we know, the robots at work in the world today, as opposed to the ones of science fiction, aren’t greedy or vindictive; they have no ethics at all. They are inanimate. On another level, robots aren’t able to navigate physical space very well; even autonomous cars rely on the vehicle itself for motion. There doesn’t seem to be much of a threat. However, these physical limitations also impede the potential for robots to adapt to their environments and solve physical problems. A team from the Massachusetts Institute of Technology is about to change that.

Nikhil Chavan-Dafle, a graduate student in mechanical engineering, is developing ways for robotic arms to manipulate objects and make use of their environment. For example, we humans take for granted our ability to screw a light bulb into a socket with one hand while tightly gripping a ladder with the other. We are able to adapt our grip, orient objects and correct mistakes, such as threading the light bulb incorrectly and starting over. We interact with our environment, and we use that environment to help us manipulate objects.

Certainly, robots are capable of performing physical tasks. Car manufacturing assembly lines prove that robotic arms are highly efficient and capable of completing repetitive tasks. What Chavan-Dafle is developing is something new: ways for robotic arms to solve physical, not virtual, problems. Chavan-Dafle explained his project to TechCrunch, saying, “We basically developed a formulation that allows robots to estimate how the forces and motions and contacts are going to be involved, and use this underlying model, it can predict how the object is going to move in the grasp.”

No word yet on whether robots are capable of manipulating objects in space, or even if that skill poses a threat to humanity. But robots have been safely handling all kinds of tools for quite some time in limited and highly fixed ways.

Snapchat Creator Values its IPO at $24 Billion

In the largest tech IPO since Alibaba went public in 2014, Snap, the parent company of the immensely popular Snapchat app, will price its offering at $17 per share with a $24 billion valuation.

 

The company looks to make a tidy profit, announcing that it would sell around 200 million shares, for a total take of about $3.4 billion. Co-founders Evan Spiegel and Bobby Murphy will cash out a combined 32 million shares, netting them each a sum of $272 million. They will retain just under 211 million shares, giving them 88% voting power between them. The remainder of the $3.4 billion that’s being offloaded will be divided among other executives and early investors.

 

Like Twitter, Snap wasn’t able to turn a profit on its Snapchat app before going public. Moreover, the company’s leadership has stated that it expects its losses to continue to grow. In terms of popularity, however, Snapchat has already surpassed Twitter, with roughly 158 million daily users. The company is also beginning to move forward with other products, such as Spectacles — a pair of sunglasses with a built-in camera that sends 10-second video clips to the Snapchat account on your phone.

 

Stranger still, the company is asking for a lot of faith from potential investors: the IPO will only offer non-voting shares, meaning Spiegel and his leadership team will remain in complete control of the company’s future and direction. Even so, as the size of the valuation indicates, the company is expecting a lot of investor enthusiasm.

 

Snap shares will begin trading on the New York Stock Exchange Thursday, March 2nd, under the ticker SNAP.

 

 

Will YouTube TV Bring an End to Traditional Television?

Will you finally be able to ditch your cable television provider? Given YouTube’s latest announcement, that may be more possible in the coming months than ever before. An article on TechCrunch explains the announcement in detail: the unveiling of YouTube’s new live internet television streaming service, YouTube TV.

 

“Cord-cutting,” the term used to describe canceling traditional television subscriptions in favor of internet streaming services, is a growing trend. Why is it becoming so popular? The price of cable television keeps rising, reaching an average of $103.10 a month in 2016. That is what makes internet streaming services so appealing: YouTube announced that YouTube TV will cost $35 a month for a family plan of up to six accounts.

 

With a price point like that, could YouTube TV bring about the end of traditional television services? That depends on its availability and offerings. While the announcement is still new, here is what we know will be offered with YouTube TV:

 

  • Live TV broadcasts from networks such as ABC, NBC, CBS and Fox.
  • Other networks such as USA, Freeform, MSNBC, CNBC, Fox News and Fox Business.
  • Sports channels such as ESPN, Fox Sports and NBC SportsNet, as well as some regional sports networks.
  • The 28 original series found on YouTube Red.
  • Cloud-based DVR that will never run out of space. Each account will have a personal DVR with tailored recommendations.
  • Visual TV guide.
  • The ability to “cast” content to your television.
  • Use of voice controls.
  • Showtime and Fox Soccer Plus can be added for an additional fee.

 

YouTube TV is expected to launch in the United States sometime in the coming months. While switching to YouTube TV will save you money, the streaming service will offer fewer channels than traditional television. With that in mind, will YouTube TV drive an increase in “cord-cutting”? That remains to be seen. However, with other non-traditional television services on the rise, such as Sling TV and Hulu’s own planned internet television streaming service, cord-cutting becomes a more viable option year after year.

 

Company Security is Turning to Robots for Extra Eyes and Ears

With security being a serious issue these days, the cost of security personnel for businesses and schools is significant. To address this, Cobalt Robotics Inc., based in Palo Alto, California, has created a robot that enhances building security without the need to cut more weekly paychecks or deal with other personnel issues.

 

The approximately 4-foot tall robots, which look something like large blue and silver bishop playing pieces from a chessboard, are not capable of replacing human security guards. Instead, these little buddies glide around a floor of the building looking for things that might be out of the ordinary, such as people in the office after hours, the sound of a window breaking or possible water leaks.

 

These units are designed for indoor use only. Using a microphone and cameras, the robot picks up audio and can record video of people. Whereas wall-mounted security cameras are stationary, these artificially intelligent robots are mobile. Again, they are not meant to replace existing security camera systems but to complement them.

 

The Cobalt robot has 60 sensors, including daytime, nighttime and wide-angle cameras, ultrasound, lidar and depth sensors. This is the same kind of technology self-driving cars use to sense the vehicle’s external environment.

 

A 2-way video chat and text screen allows a security guard in another part of the building to communicate with the person the robot has approached. Often, the guard will ask the person to scan his or her employee ID badge against the RFID reader on the front of the robot.

 

Cobalt’s target market is companies with large or complex buildings, such as hospitals, museums, warehouses, office buildings and schools. The company expects to get a portion of the physical security market, which is expected to reach $110 billion within three years.

 

Currently, these innovative devices are in pilot-program mode. Plans for future development include flagging changes in the building and tagging assets, such as computers, TVs, inventory and other devices of value.

 

Investors in this project like the fact that additional technology can be added to future models. The robots will not become obsolete, because the designers will be able to incorporate new features and keep up with businesses’ growing and changing security needs.

 

Companies that need more security but must keep costs under control will find that Cobalt robots can patrol floors and examine corners, freeing up security personnel for tasks that only humans can do, such as escorting someone from the building or investigating something the robot reports as unusual activity.

Verizon Wireless Penalizes Users Who Exceed 200GB Per Month

Verizon is doing everything it can to get a handle on data usage. It recently announced that it is disconnecting users who consume more than 200GB of data in a month, which calls into question the meaning of unlimited data. It is worth remembering that Verizon Wireless dropped its unlimited data plans more than a couple of years ago but grandfathered in users who held onto the plan. Users who average more than 200GB of data will either be urged to sign up for new contracts or be disconnected.

While this does seem harsh for an unlimited plan, one must admit that 200GB is a lot; a user would have to be online all day, every day. That said, unlimited data is supposed to mean unlimited data, so the change is a slap in the face for people used to not worrying about data caps. The move might also push people away from Verizon Wireless toward mobile carriers that still offer unlimited data.

Unlimited data plans are controversial enough to begin with. Many people have to deal with throttling, which they don’t like, though the fine print does say that only a certain amount of data is delivered at high speed; the rest is still unlimited, just at slower speeds.

Beneficial Deception Keeps Users Happy

Software developers and designers employ various tactics to optimize user experience, but one seemingly devious tactic is quite helpful to both users and brand websites. The practice of beneficial deception isn’t new, but tech writer Kaveh Waddell recently investigated the phenomenon after experiencing it himself. While filing his taxes, he noticed that the TurboTax progress bars seemed a little quirky: they ran too smoothly and took their time. He wondered whether the program was actually double- and triple-checking his return as it promised, and whether it should take that long, considering the program must have been processing the return as he filled it out. Waddell was correct. The TurboTax progress bar was faking it.

 

Waddell consulted fellow techie Andrew McGill, and the two looked through the program’s source code and found that the progress bar ran separately from the actual tax-processing portion of the program. It also ran for the same length of time, and in exactly the same manner, for every single TurboTax user. According to TurboTax, the delay and the bright animations help ease users’ tax anxieties: the graphics give users a bit of time to calm down, build confidence and trust the software. Although most other uses of beneficial deception hide faults or delays, the TurboTax developers purposefully create a waiting period.
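
To make the mechanism concrete, here is a rough, hypothetical sketch (in Swift, and not TurboTax’s actual code) of a progress display driven by its own fixed clock rather than by the work it claims to track:

```swift
import Foundation

// Hypothetical sketch of this flavor of "beneficial deception": the progress
// display below is paced by a fixed-length timer, not by the real computation,
// so every user sees the same reassuring animation for the same amount of time.
func showFixedDurationProgress(label: String, totalSeconds: Double, steps: Int = 10) {
    let interval = totalSeconds / Double(steps)
    for step in 1...steps {
        Thread.sleep(forTimeInterval: interval)   // paces the animation, not the work
        print("\(label)... \(step * 100 / steps)%")
    }
}

// The actual processing may already be finished by the time this runs;
// the display still takes its full, scripted five seconds.
showFixedDurationProgress(label: "Double-checking your return", totalSeconds: 5)
print("Everything looks good!")
```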

 

Not all of this is placed squarely on the developers, of course. User experience experts, test studies, control groups and human behavioral scientists also provide insight into optimal user experiences. This particular tactic, of extending a wait period, also serves to heighten user suspense and creates a more satisfying conclusion to an otherwise easily automated and instantaneous result. It seems people like feeling a little bit nervous, but only a little bit and for only a little while.