Let’s face it: launching a UCaaS (Unified Communications as a Service) system from scratch can seem pretty daunting. This type of cloud phone system incorporates many different types of communication / collaboration services to help your business succeed. Whether you’re talking about voice, video, messaging, meetings, presence or team-based collaboration, they’re all traditionally available as options.
And while the hallmark of a UCaaS environment is its ability to bring together all of these elements, there are many (frequently overwhelming) considerations you have to make on the back end to determine if and how it can work for you. Then, there are past investments. Over the years, many businesses like yours have spent significant time and money toward implementing many traditional communication and collaboration options that are now outdated for the modern landscape.
Previous choices, however, should not complicate matters when considering when, how and why to make that jump to an all-encompassing UCaaS infrastructure.
In most cases, your provider may be able to handle the dirty work for you. The key is to select the specific features and elements that your organization needs, and see what sorts of deployment options are available from the host.
So … whether you’re starting out from scratch or migrating to a full-on – or even hybrid – UCaaS environment, most of this rollout involves the streamlining and automation of a la carte options.
Communications and collaboration options abound, but are not all necessary in every business case. From a UCaaS perspective, it can be scaled as necessary, with features added or taken away in an a la carte fashion as your organization’s needs change. In other words, UCaaS allows you to establish custom communications infrastructure exactly as you need while also supporting future growth initiatives.
For that reason, you should carefully evaluate the options and capabilities made available to you by the various providers, weighing them against your own unique needs.
One of UCaaS' best traits is how well suited it is to remote and telework environments. This cloud-based system can be deployed across many different types of communication channels and devices. The flexibility it extends to your deployment, however, runs much deeper.
In particular, focus on factors such as extending customer communications channels and streamlining CRM services with these options.
With flexibility comes the need for reliability, a factor that can largely be out of many organizations' control. Sadly, the stability of a UCaaS deployment is largely tied to the state of your – and your remote workers' – internet connection, as well as the relative strength of the service provider and the UCaaS architecture that it provides.
In most cases, the smart move is to choose a deployment that guarantees virtually unlimited uptime and availability, as well as geo-redundancy for better backup integrity. Many vendors claim to provide these capabilities. Reading customer and peer reviews, however, can often tell a different story.
UCaaS users tend to be the most productive when they can communicate and collaborate consistently in real time, as if seated in the same room as one another.
Like most extensive software deployments, the price tag will almost always play a factor. Choose a UCaaS system that gives you the most proverbial “bang for your buck,” weighing the critical features that you need against the feasible cost of their implementation.
While it can sometimes pay to take a chance, securing the features and elements you need most at the lowest available cost is almost always the prudent move.
Have you considered the numerous features, capabilities and benefits that come as part of a comprehensive Sangoma UCaaS deployment? Visit www.Sangoma.com or call (877) 344-4861 for more information.
The post How to Deploy or Migrate to UCaaS the Smart Way appeared first on Sangoma.
In an earlier blog, we wrote about how cloud-based Mobile Access Control is disrupting the market at break-neck speed. Nowadays, savvy small and medium-sized businesses (SMBs) have discovered that cloud-based systems can deliver benefits for workplace security just as they do for business phone systems and communication tools. In fact, data shows that nearly 40% of end users have already deployed wireless door locks as part of their access control solution.
There’s no doubt that users today demand seamless digital experiences, and cloud-native mobile access controls certainly help organizations satisfy these needs. Gaining quick access to doors, elevators, and turnstiles with a simple tap or swipe can make a big difference to users. And since most employees already have a smartphone on them at all times, it makes perfect sense. What’s more, administrators can more easily manage frequent activities such as adding or revoking privileges for new and departing employees or providing guest access. It’s a win-win!
Like any technology, however, not all solutions are made equal. To ensure organizations get the most bang for the buck, we’ve put together the top ten things they should look for when considering different solutions. Let’s explore below:
You’re likely already familiar with access control systems. But while legacy systems are cumbersome, requiring lag time to collect and distribute access components, cloud-based systems are quick and efficient, particularly when dealing with multiple job sites. Cloud-native controls are all-inclusive, including wireless door locks, sensors, software and controllers, which lets organizations implement a complete smart office system in one fell swoop.
Frequent tasks like onboarding employees or removing access can be tedious and time-consuming for a system administrator. Cloud-native access control, on the other hand, lets organizations automate processes for greater speed and efficiency. Administrators can also save time and effort with a system that can be controlled from the palm of their hand – from adjusting configurations to reviewing events. In the unlikely scenario where an employee shows up without their phone, administrators can even remotely unlock the doors.
With today's access control systems, there is no need to purchase technology like key fobs. Instead, employees access their office through a designated mobile app, which works well considering that 93% of the U.S. population currently uses smartphones. Most people these days are already accustomed to conducting business with their personal devices, such as paying bills, sending money or verifying their identity, making it an easy learning curve. Within an app, employees can view the status of each door and swipe a small button to open it. The system then automatically resets the door status and logs the activity through the cloud.
Administrators should be able to effortlessly add users, assign different permissions and access levels, or move access privileges to a different job site. Following installation, they should also enjoy significant flexibility in terms of how their employees enter their job site, as the system can work seamlessly – whether a business chooses to deploy a mobile app or distribute more traditional access control technology like cards or fobs.
Adopting a completely wireless access control system, also referred to as Access Control as a Service (ACaaS), is typically an affordable option for businesses concerned with security. Cost-savings come from the system being quite simple to install and configure – never exceeding four hours! Oftentimes, ACaaS can leverage a business’s existing infrastructure, like using already-installed locks, which eliminates the need for extra wiring and their associated costs.
There is no need to fret about cybersecurity when implementing cloud-native access control. Seek out ACaaS technologies that use encrypted cloud-based technology to authenticate employee devices and activity. The system’s integrity is further enhanced by existing smartphone safety technology like facial and fingerprint recognition. Modern access control solutions are “contactless,” which also contributes to overall workplace safety. These systems reduce the need to touch key cards, fobs and door readers – reducing the spread of contagious contaminants.
You’ve probably already realized that ACaaS is flexible enough to work well for any operation, even with pre-existing security setups. Whether you or your customer wants a fully wireless ACaaS solution or would prefer to seamlessly build upon previously installed infrastructure, it’s all possible with modern access control.
Administering modern access control technology should be done independent of location and from any connected device. Talk about the perfect option for the hybrid workforce! Whether in-office, at home or on the road, admins should have complete visibility into their system and total control over who’s accessing the office – all from their mobile phone.
Having an access control system that integrates with a user directory is a must. It allows for the intuitive configuration of user permissions, the establishment of groups and enables automatic provisioning via emails sent to users’ mobile devices.
Sangoma SmartOffice brings the best of advanced access control to users' smartphones. Nearly every business has security considerations; yet many lack advanced access control. Employees may have a key or fob to get into the building, but this falls far short of the efficiency, security and affordability of Sangoma SmartOffice.
Managed entirely through the convenience of the smartphone, SmartOffice operates over Wi-Fi or cellular connections to physically open locks. And since it is integrated with the Sangoma unified communications (UC) system, administrators can access user directory information to easily manage users, groups, and permissions for office access and the PBX from any location.
Sangoma Cloud service powers SmartOffice, ensuring that every user and every action is properly authenticated within the system. Seeing as the solution also uses employee smartphones, SmartOffice integrates common mobile security features like password protection. These security protections work together seamlessly to guarantee only authorized persons have access to the premises.
Finally, as businesses grow, so too will SmartOffice. There is no limit to the number of wireless doors or mobile users that can be included. No matter how a company develops over time, SmartOffice can remain the foundation of its operational security.
Want to explore improving your company’s security with SmartOffice? Contact us today!
The post 10 Things Organizations Should Look for in Cloud-based Mobile Access Control appeared first on Sangoma.
In my last blog, I discussed "digital channels," one of which is collaboration tools. Let's take a closer look at the importance of collaboration tools. Just take a look at this graphic.
We’ve all been using collaboration and messaging tools as we’ve been working remotely. Or maybe we’ve been using them anyway since we work in a distributed workforce. For sure, I can say productivity is enhanced by messaging someone, and sharing documents.
Frost and Sullivan feels the same way. As the whitepaper says: "Communications provide the connective tissue among diversified workforces—across departments, project teams, job roles, work locations and worker generations. Enabling a collaborative work environment can considerably strengthen employee bonds, enhance morale, improve the customer journey and drive workflow efficiencies."
And this is why Sangoma has invested quite a bit in our own collaboration platform, TeamHub, which is part of our UC platforms. While an essential part of UC is certainly the smartphone and laptop business phone number mobility aspect, it is becoming increasingly apparent that collaboration and video meetings now also need to be an essential part of the UC offering. And if you are getting these from different vendors, you don't truly have an integrated UC platform.
To download the full Frost and Sullivan whitepaper to learn more about this, please go here.
The post Why Collaboration Tools Are Important appeared first on Sangoma.
There’s no doubt video conferencing and remote working have garnered a lot of attention over the last few years. Despite this, SIP-based voice services are still critical for businesses that are responsible for high volumes of call traffic. And because the SIP protocol enables internet-based telephony services to integrate with data networks and cloud-based services, businesses can build a more robust unified communications system.
Wholesale SIP trunks are scalable and configurable on-demand connections for MSPs and ITSPs that wish to provide or sell hassle-free, high-value services to employees and customers. What’s more, those who manage communications for call centers and enterprise businesses can work directly with a SIP Trunk Provider or an MSP/ITSP to keep their organizations running seamlessly as they change or grow.
Here are just a few of the benefits of Wholesale SIP Trunking and what you should look for when selecting a vendor:
If you’re responsible for procuring carrier services, wholesale SIP trunking provides freedom of choice so you can curate your offering, unlike retail SIP trunking with bundled offerings. SIP-related services can be selected à la carte and packaged together to deliver SIP services across geographically dispersed locations.
Look for solutions with a single point of entry with wide coverage. We can’t stress this enough! Plus, avoid providers that require large minimum spending, carrier negotiations, or contracts to maintain. Flexibility enables you to deliver or achieve a high-performance unified communications network that creates value for businesses.
Whether you’re an enterprise customer, ITSP or MSP, seek out a complete set of SIP trunking features to meet your requirements.
At the end of the day, flexibility and ease of use are worth their weight in gold. When selecting a vendor for wholesale SIP trunking services, ensure their solutions allow you to instantly take control of provisioning SIP trunks; adding, deleting or modifying services; porting local numbers; and managing billing and account changes.
What’s more, look for vendors offering back-office access via a self-service portal or API so you can easily respond to shifting business requirements, adapt to new market conditions, and capitalize on new opportunities.
Large enterprises use wholesale SIP trunking for their own purposes, unlike ITSPs and MSPs, which need to create offerings to sell to retail customers. MSPs should choose a solution they can trust to be ready on time and function with minimal disruption and risk. With hundreds of deployments completed, Sangoma's turnkey solutions and processes have proven to be reliable.
Turnkey solutions will include all the service offerings the provider chooses, as well as hosted billing, wrapped together – fully integrated, complete, and ready to operate.
Turnkey solutions deliver several key benefits.
Our wholesale SIP trunking team is focused on client satisfaction and helping you achieve your business goals. We respond quickly, give you professional advice, and ensure that every interaction with us is a great experience.
Are you in an urgent situation? We can help you get up and running in under a day. Here’s a case study of a one-day turnup.
If you have questions or need additional information, we have an extensive base of resources that are accessible to you, including subjects such as the Modern Approach to SIP trunking or why we’re The Leading Wholesale SIP Trunking Provider.
The post Envision it. Enable it. Control it. Wholesale SIP Trunking Lets Businesses Work Their Way appeared first on Sangoma.
Balázs Kreith of the open-source WebRTC monitoring project ObserveRTC shows how to calculate WebRTC latency - aka Round Trip Time (RTT) - in p2p scenarios and end-to-end across one or more SFUs. WebRTC's getStats provides relatively easy access to RTT values, but using those values in a real-world environment for accurate results is more difficult. He provides a step-by-step guide using some simple Docker examples that compute end-to-end RTT with a single SFU and in cascaded SFU environments.
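As a quick taste of the starting point, here is a minimal sketch (not Balázs's full method, and field availability varies by browser) of the getStats values the article builds on:

// Minimal sketch: read the RTT values getStats exposes on an RTCPeerConnection.
// Turning these into an accurate end-to-end figure across SFUs is what the
// linked guide covers in detail.
async function logRtt(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === 'candidate-pair' && report.nominated) {
      // Next-hop RTT: this peer to whatever it is directly connected to (peer or SFU).
      console.log('next-hop RTT (s):', report.currentRoundTripTime);
    }
    if (report.type === 'remote-inbound-rtp') {
      // RTT derived from RTCP reports sent back by the remote side.
      console.log('remote-reported RTT (s):', report.roundTripTime);
    }
  });
}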
The post Calculating True End-to-End RTT (Balázs Kreith) appeared first on webrtcHacks.
Frost and Sullivan has written a white paper called “Modern Cloud Communications Empower the Hybrid Workforce”. It’s an interesting paper all around. To download the full paper, please go here. In the meantime, I picked 4 interesting parts of the paper to write a little bit about and I’ll do that in the next four blogs I write.
So, let's get to the first topic. One concept Frost and Sullivan writes about is the impact of the pandemic on IT and what IT must do in terms of supporting employees going forward. Obviously (or maybe not so obviously, depending on your point of view), a hybrid mix of working remotely and working at the office is here to stay.
How do you function as the best “team” in this kind of environment? By now, we’ve all gotten used to video meetings and collaboration messaging tools. And continuing those are important for sure. But Frost and Sullivan argues that there needs to be “more”, and integrating digital channels to deal with not only remote employees, but customers (who now have also been exposed to messaging tools) wanting to connect with you digitally is critical for success.
The paper goes into this in much more detail, but my takeaway from reading it is that collaboration tools, and other types of "digital channels" that work with employees and also reach out to customers, will be critical. Like I said above, to download the full paper, please go here.
The post Your UC / UCaaS Platform Needs “Digital Channels” appeared first on Sangoma.
Before discussing the benefits of a managed security service for your business, let’s discuss what threats we are talking about. As an IT manager for your business, you are likely worried about attacks on your corporate network in the form of viruses, spam, phishing attempts, malicious content, unapproved website access where “bad” files can be accessed, etc.
So how do you do something about this and manage it better? Maybe you can tackle different parts of this by adding virus checkers or spam filtering or defending against network attacks, for example. But that tactic, while addressing specific areas, would leave some vulnerabilities, and you'd have to manage each area yourself anyway. It might become complicated, and you wouldn't have everything covered. You'd cover what you thought was necessary or what you had the budget to cover.
So how can you best tackle addressing all kinds of threats to your network economically? That’s how Unified Threat Management came to be – a single device to be put in place that inspects the incoming data – looks at packet headers, looks for known virus/malware, etc., and identifies these threats to an administrator.
And that's great. But what if you are a larger company with many locations? It starts to get even more complicated. The next step would be to get help – have a managed service help you do this, or maybe even manage this for you. A unified threat management device might be put at your location, or at each location if you have several, and you could have someone manage this for you. Or you could decide you don't want any devices on-prem, and all of this can also be done via the cloud.
Either way, someone could help you manage this. An expert team of people would work with you to set up profiles, help you figure out whether an all-cloud solution could work for you, and potentially fully manage your threat management system or co-manage it with your team. And Sangoma, through our acquisition of NetFortris, provides managed security to companies of all sizes.
The post The Benefits of a Managed Security / Unified Threat Management Service appeared first on Sangoma.
A full review and guide to all of the Jitsi Meet-related projects, services, and development options including self-install, using meet.jit.si, 8x8.vc, Jitsi as a Service (JaaS), the External iFrame API, lib-jitsi-meet, and the Jitsi React libraries among others.
The post The Ultimate Guide to Jitsi Meet and JaaS appeared first on webrtcHacks.
It has been a while since our first release of end-to-end encryption for the web app and ever since we have tried to enhance and improve it. One of these enhancements was the introduction of The Double Ratchet Algorithm through libolm and automatic key negotiation.
Each participant has a randomly generated key which is used to encrypt the media. The key is distributed to other participants (so they can decrypt the media) via an E2EE channel which is established with Olm (using XMPP MUC private messages). You can read more about it in our whitepaper.
Even though the actual encryption/decryption API is different on web and mobile ("Insertable Streams" vs native Encryptors/Decryptors), the key exchange mechanism seemed like something that could be kept consistent between the two (even three, considering Android and iOS are different) platforms. This took us to the next challenge: how can we reuse the JS web implementation of the double ratchet algorithm without any major changes, while also keeping in mind the performance implications it might have on the mobile apps.
Since our mobile apps are based on React Native the obvious solution was to wrap libolm so we could use the same code as on the web, but not all wrappers are created equal.
There are three major drawbacks to this approach:
- Every call that crosses the bridge has its data serialized and deserialized, which carries a performance cost.
- The native side has to be implemented twice, once for Android and once for iOS.
- All bridge methods are asynchronous.
The first issue might not have had such a major impact on this specific use case, since the key exchange happens not too frequently. The fact that every change has to be implemented twice is very likely to be a problem in the future, while the last issue, the asynchronicity of the bridge methods is definitely a showstopper since it would break the consistency of the web and mobile interfaces.
JavaScript Interface (JSI) is a new layer between the JavaScript engine and the C++ layer that provides means of communication between the JS code and the native C++ code in React Native. Since it doesn't require serialization, it is a lot faster than the traditional bridge approach, and it allows us to provide a performant sync API.
As we'll show in what follows, it also solves the other two problems the classical approach poses: the implementation has to be done/modified only once (most of the time, since some glue code is still required) and, most importantly, the native methods called through JSI can be synchronous.
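To make the interface-consistency point concrete, here is a hypothetical sketch (the module and method names are made up for illustration, not the actual Jitsi code) contrasting the classic asynchronous bridge with a synchronous JSI host function:

// Classic bridge: every native call crosses the bridge asynchronously, so the
// mobile API would have to return Promises, unlike the synchronous API of the
// olm JS package used on the web.
import { NativeModules } from 'react-native';

async function createAccountViaBridge(): Promise<unknown> {
  // 'OlmBridge' is a made-up module name used only for this illustration.
  return NativeModules.OlmBridge.createAccount();
}

// JSI: host functions exposed on a global object can be invoked synchronously,
// which keeps the mobile interface identical to the web one.
const olmAccount = (global as any)._olm.createOlmAccount(); // no await needed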
The first challenge was to find the proper way of initializing the C++ libraries and exposing the so-called “host functions” (these are C++ functions callable from the JS code).
For this we took advantage of the mechanism for native modules and the way they are initialized by the RN framework, thus creating OlmModule.java and OlmPackage.java. OlmPackage is just a simple ReactPackage that has OlmModule as its native module.
Within the lifecycle of this ReactContextBaseJavaModule, the actual magic happens: loading the C++ libraries and exposing the necessary behavior to the JS side.
The C++ library is loaded inside a static initializer.
Exposing the host functions to the JS side is done in the initialize method of the OlmModule, through the JNI native function nativeInstall. This method is implemented in cpp-adapter.cpp, where, besides some JNI-specific code, jsiadapter::install is called; this is where the host functions are actually exposed. It is here that the Android-specific glue code ends, the jsiadapter being platform agnostic and used, as we'll show, by iOS as well.
We also used the iOS native bridge mechanism for initialization, but here the implementation is even easier: Olm.h and Olm.mm contain the module, where, in the setBridge method, jsiadapter::install is called, exposing the host functions.
As stated above, both Android and iOS specific code ends up calling the platform agnostic jsiadapter::install method. It is here where the C++ methods are exposed, i.e. JS objects are set on jsiRuntime.global with methods that call directly into the C++ code.
Object module = Object(jsiRuntime);
// …add methods to module
jsiRuntime.global().setProperty(jsiRuntime, "_olm", move(module));
This object will be accessible on the JS side via a global variable. For our use case only one object is enough, but it is here where as many objects as necessary can be exposed, without having to change any of the platform specific code.
auto createOlmAccount = Function::createFromHostFunction(
    jsiRuntime,
    PropNameID::forAscii(jsiRuntime, "createOlmAccount"),
    0,
    [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
        auto accountHostObject = AccountHostObject(&runtime);
        auto accountJsiObject = accountHostObject.asJsiObject();
        return move(accountJsiObject);
    });
module.setProperty(jsiRuntime, "createOlmAccount", move(createOlmAccount));

auto createOlmSession = Function::createFromHostFunction(
    jsiRuntime,
    PropNameID::forAscii(jsiRuntime, "createOlmSession"),
    0,
    [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
        auto sessionHostObject = SessionHostObject(&runtime);
        auto sessionJsiObject = sessionHostObject.asJsiObject();
        return move(sessionJsiObject);
    });
module.setProperty(jsiRuntime, "createOlmSession", move(createOlmSession));
Two methods are exposed: createOlmAccount and createOlmSession, both of them returning HostObjects.
It’s a C++ object that can be registered with the JS runtime, i.e. exposed methods can be called from the JS code, but it can also be passed back and forth between the JS and C++ while still remaining a fully operational C++ object.
For our use case, the AccountHostObject and SessionHostObject are wrappers over the native olm specific objects OlmAccount and OlmSession, and they contain methods that can be called from the JS code (identity_keys, generate_one_time_keys, one_time_keys etc. for AccountHostObject; create_outbound, create_inbound, encrypt, decrypt etc. for SessionHostObject).
The way these methods are exposed from C++ to JS is again through host functions, in the HostObject::get method:
Value SessionHostObject::get(Runtime &rt, const PropNameID &sym) {
    auto methodName = sym.utf8(rt);

    if (methodName == "create_outbound") {
        return Function::createFromHostFunction(
            *runtime,
            PropNameID::forAscii(*runtime, "create_outbound"),
            0,
            [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
                auto sessionJsiObject = thisValue.asObject(runtime);
                auto sessionHostObject = sessionJsiObject.getHostObject<SessionHostObject>(runtime).get();
                auto accountJsiObject = arguments[0].asObject(runtime);
                auto accountHostObject = accountJsiObject.getHostObject<AccountHostObject>(runtime).get();
                auto identityKey = arguments[1].asString(runtime).utf8(runtime);
                auto oneTimeKey = arguments[2].asString(runtime).utf8(runtime);
                sessionHostObject->createOutbound(accountHostObject->getOlmAccount(), identityKey, oneTimeKey);
                return Value(true);
            });
    }

    // ...the other session methods (create_inbound, encrypt, decrypt, etc.) are handled the same way.
    return Value::undefined();
}
Example:
const olmAccount = global._olm.createOlmAccount();
const olmSession = global._olm.createOlmSession();
olmSession.create_outbound(olmAccount, "someIdentityKey", "someOneTimeKey");
As shown, global._olm.createOlmAccount() and global._olm.createOlmSession() will return a HostObject. When calling any method on it (create_outbound in the example) the HostObject::get method will be called with the proper parameters, i.e. the Runtime and the method name, so we use this method name to expose the desired behavior.
Note that the calling HostObject can be fully reconstructed on the C++ side:
auto sessionJsiObject = thisValue.asObject(runtime);
auto sessionHostObject = sessionJsiObject.getHostObject<SessionHostObject>(runtime).get();
Parameters can also be passed from JS to C++, including other HostObjects:
auto accountJsiObject = arguments[0].asObject(runtime);
auto accountHostObject = accountJsiObject.getHostObject<AccountHostObject>(runtime).get();
auto identityKey = arguments[1].asString(runtime).utf8(runtime);
auto oneTimeKey = arguments[2].asString(runtime).utf8(runtime);
As mentioned from the very beginning, keeping the web and mobile interfaces consistent was the main goal, so, after implementing all the necessary JSI functionality, it was all wrapped into some nice TypeScript classes: Account and Session.
Their usages are shown in the example integration that comes with the SDK:
const olmAccount = new Olm.Account();
olmAccount.create();
const identityKeys = olmAccount.identity_keys();

const olmSession = new Olm.Session();
olmSession.create();
olmSession.create_outbound(olmAccount, idKey, otKey);
This is the exact same API that the olm JS package exposes. Mission accomplished!
Implementing this RN library that exposes the libolm functionality is just a piece of the bigger mobile E2EE puzzle. It will be integrated in the Jitsi Meet app and used for the implementation of the E2EE communication channel between each participant, i.e. for exchanging the keys.
Since the WebCrypto API is not available in RN, we have to expose a subset of the methods for key generation (importing, deriving, generating random bytes) and again we plan to do it through JSI.
It turns out the olm library contains these methods, so it is possible we'll expose them in the react-native-olm library.
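Purely as an illustration of the kind of surface that would need to be exposed (these names are hypothetical, not an actual API), it might look something like:

// Hypothetical sketch only: a minimal subset of key-handling operations that
// would have to come from native code (via JSI) in place of WebCrypto on React Native.
interface NativeKeyCrypto {
  getRandomBytes(length: number): Uint8Array;
  importKey(rawKey: Uint8Array): unknown;                  // opaque native key handle
  deriveKey(baseKey: unknown, info: Uint8Array): unknown;  // e.g. a key-derivation step
}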
WebRTC provides a simple API that allows us to obtain the same result that we do on the web with “insertable streams”: FrameEncryptorInterface and FrameDecryptorInterface, in the C++ layer.
The encryptor is set on an RTPSender and the decryptor on an RTPReceiver; they basically act as a proxy for each frame that is sent or received, making it possible to add logic for constructing/deconstructing the SFrame for each frame.
The fact that this code runs on the native side is of major importance: the overhead of crossing between JS and native would be severe here, since these operations have to happen many times a second, for every frame, and would likely make the audio and video streams incoherent.
The only operations done from the JS side are enabling E2EE and the key exchange steps. We will have to expose methods for passing the AES-GCM keys from JS to the native FrameEncryptors and FrameDecryptors, most likely via the JSI path.
While we were busy working on this, the good folks over at Matrix created vodozemac, a new libolm implementation in Rust, and they highly recommend migrating to it going forward. At the moment it only provides bindings for JS and Python, while the C++ one is still in progress. We'll keep a close eye here and update to vodozemac after we have all the pieces in place.
You can start tinkering with it today, here is the GitHub repo.
Your personal meetings team.
Author: Titus Moldovan
The post A stepping stone towards end-to-end encryption on mobile appeared first on Jitsi.
A very detailed look at the WebRTC implementations of Google Meet and Google Duo and how they compare using webrtc-internals and some reverse engineering.
The post Meet vs. Duo – 2 faces of Google’s WebRTC appeared first on webrtcHacks.
Back in 2018 we first released cascaded bridges based on geo-location on meet.jit.si. Then in 2020 as we struggled to scale the service to handle the increased traffic that came with the pandemic we had to disable it because of the load on the infrastructure. And now it’s finally back stronger and better!
In this post we'll go over how and why we use cascaded bridges with geo-location, how the new system is architected, and the experiment we ran to evaluate the new system.
We want to use geolocation for the usual reason – connect users to a nearby server to optimize the connection quality. But with multiple users in a conference the problem becomes more complex. When we have users in different geographic locations, using a single media server is not sufficient. Suppose there are some participants in the US and some in Australia. If you place the server in Australia, the US participants will have a high latency to the server and an even higher latency between each other – their media is routed from the US to AU and back to the US! Conversely if you place the server in the US the Australian participants have the same issues.
We can solve this by using multiple servers and having participants connect to a nearby server. The servers forward the media to each other. This way the “next hop” latency is lower, and so is the end-to-end latency for nearby endpoints.
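As a rough illustration (these numbers are hypothetical, just to make the trade-off concrete): suppose the US to Australia round trip is about 200 ms and each participant is about 20 ms from a server in its own region. With a single server in Australia, two US participants each see roughly 220 ms to the server and over 400 ms between each other. With cascaded servers, the US participants see about 20 ms to their local server and about 40 ms between each other, while the US to AU pairs pay roughly 20 + 200 + 20 ≈ 240 ms.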
There are many new things in our backend architecture!
We used to have shards consisting of a “signaling node” and a group of JVB (jitsi-videobridge, our media server) instances. In order to make bridges in different regions available for selection, we just interconnected all bridges in all shards. And this is exactly what broke when we had to scale to 50+ shards and 2000+ JVBs.
In the new architecture JVBs are no longer associated with a specific shard. A “shard” now consists of just a signaling node (running jicofo, prosody and nginx). We have a few of these per region, depending on the amount of traffic we expect. Independently, we have pools of JVBs, one pool in each region, which automatically scale up and down to match the current requirements.
In addition we have "remote" pools. These are pools of JVBs which make themselves available to shards in remote regions (but not in their local region). For example, we have a remote pool in us-east which connects to signaling nodes in all other regions. This separation of "local" vs "remote" pools is what allows us to scale the infrastructure without the number of cross-connections growing too much.
As an example, in the us-east region (Ashburn) we have 6 signaling nodes (“shards”) and a pool of JVBs available to them. This is the us-east “local” pool. We also have multiple “remote” JVB pools connected to the shards — one from each of the other regions (us-west, eu-central, eu-west, ap-south, ap-northeast, ap-southeast). Finally, we have a us-east “remote” JVB pool connected to shards in all other regions.
In late 2021 we completely replaced the COLIBRI protocol used for communication between jicofo and JVBs. This allowed us to address technical debt, optimize traffic in large conferences, and use the new secure-octo protocol.
In contrast to the old octo protocol, secure-octo connects individual pairs of JVBs. They run ICE/DTLS to establish a connection, and then use standard SRTP to exchange audio/video. This means that a secure VPN between JVBs is no longer required! Also, we can filter out streams which are not needed by the receiving JVB.
In the experiments we ran in 2018 we found that introducing geo-located JVBs had a small but measurable negative effect on round-trip-time between endpoints in certain cases. Notably endpoints in Europe had, on average, a higher RTT when cascading was enabled. We suspected that this was because we use two datacenters in Europe (in Frankfurt and London) and many endpoints have a similar latency to both. In such cases, introducing the extra JVB-to-JVB connection has almost no impact on the next-hop RTT, but increases the end-to-end RTT between endpoints.
To solve this problem we introduced “region groups”, that is we grouped the Frankfurt and London regions, as well as the Ashburn (us-east) and Phoenix (us-west) regions. With region groups, we relax the selection criteria to avoid using multiple JVBs in the same region group.
As an example, when a participant in London joins a conference, we will select a JVB in London for them. Then, if a participant in Berlin (closer to Frankfurt than London) joins we will use that same JVB in London instead of selecting a new one in Frankfurt.
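A simplified, hypothetical sketch of what region-group-aware selection might look like (illustrative logic only, not the actual jicofo implementation):

// Prefer a bridge already in the conference whose region group contains the
// participant's region before allocating a new bridge in the participant's own region.
type Region = string;

const REGION_GROUPS: Region[][] = [
  ['eu-central', 'eu-west'],  // Frankfurt + London
  ['us-east', 'us-west'],     // Ashburn + Phoenix
];

function groupOf(region: Region): Region[] {
  return REGION_GROUPS.find((g) => g.includes(region)) ?? [region];
}

function selectBridge(
  bridgesInConference: { region: Region }[],
  participantRegion: Region,
): { region: Region } | null {
  const group = groupOf(participantRegion);
  // Reuse a bridge already in the conference if it sits in the same region group...
  const existing = bridgesInConference.find((b) => group.includes(b.region));
  if (existing) {
    return existing;
  }
  // ...otherwise a new bridge would be allocated in the participant's own region.
  return null;
}

With eu-central and eu-west grouped, the Berlin participant in the example above reuses the existing London bridge rather than triggering a new allocation in Frankfurt.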
The new meet-jit-si infrastructure allowed us to easily perform an experiment comparing the case of no bridge cascading (control), cascading with no region groups defined (noRG) and cascading with region groups (grouping us-east and us-west, as well as eu-central and eu-west). We had 3 experimental “releases” live for a period of about two weeks, with conferences randomly distributed between them and the main release. We measured two things: end-to-end round trip time between endpoints and round-trip-time between an endpoint and the JVB it’s connected to (next-hop RTT).
By and large the results show that cascading works as designed and the introduction of region groups had the desired effect.
With cascading we see significantly lower end-to-end RTT in most cases when the two endpoints are in the same region.
When the two endpoints are in different regions we see a slight increase when region groups are used, but the overall effect of cascading is positive.
The next-hop RTT is also significantly reduced with cascading. Overall we see a 29% decrease (from 223 to 158 milliseconds) when the endpoint and server are (were) on different continents.
You can see the full results here.
The post Bridge cascading with geo-location is back appeared first on Jitsi.
Before we get to that question, and why I even asked it in the first place, we need to define what hot desking is. Hot desking is going to the office and sitting wherever you want. Maybe your company doesn't have assigned offices or cubes, and you just go in. And remember, if we're going to live in this hybrid world of working from home and going into the office 2 or 3 times a week, companies will start doing this. Count on it. And if you don't like it, go in every day to claim your space. The company can take a smaller footprint at the physical location since everyone isn't there every day. So you sit wherever. But you need your office phone number, so that calls ringing on the phone at the desk where you happen to sit are your calls.
In terms of communications, which I write about, this means taking your phone number to wherever you go sit. And if there is a physical phone at the place you sit, that means you need to log in somehow and tell the phone system that you are at that place and that the physical phone at that place should be using your company phone extension.
Putting this feature into a Unified Communication system is not the simplest thing. We have multiple Unified Communication systems, and some support hot desking, and some do not. If an RFP comes in with hot desking requirements, we can respond positively with one of our systems.
But is hot desking of a physical phone an obsolete concept with Unified Communications? Does it even need to be in RFPs anymore? Because with UC, you can make and take phone calls with your work extension from your computer, or your smartphone, via a UC client. I can do that no matter where I sit in the office, from any hotel room, from my patio at my house, or from the Giants game (which I have done). So there is built-in hot desking with modern Unified Communications systems. But if you still need it for physical phones, we can do that too.
The post Is telephone Hot Desking really needed anymore? appeared first on Sangoma.
Step-by-step guide on how to fix bad webcam lighting in your WebRTC app with standard JavaScript APIs for camera exposure or natively with uvc drivers.
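As a taste of the JavaScript route, here is a minimal sketch (support for these constraints varies by browser and camera, so capabilities should always be checked first):

// Minimal sketch: switch a camera track to manual exposure where supported.
async function setManualExposure(track: MediaStreamTrack, exposureTime: number) {
  // getCapabilities() exposes camera-specific fields not covered by the standard
  // TypeScript types, so treat the result loosely here.
  const caps = track.getCapabilities() as Record<string, unknown>;
  const modes = caps.exposureMode as string[] | undefined;
  if (!modes || !modes.includes('manual')) {
    console.warn('Manual exposure is not supported on this camera');
    return;
  }
  await track.applyConstraints({
    advanced: [{ exposureMode: 'manual', exposureTime }],
  } as MediaTrackConstraints);
}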
The post Fix Bad Lighting with JavaScript Webcam Exposure Controls (Sebastian Schmid) appeared first on webrtcHacks.
Before we talk about the benefits of a managed SD-WAN service, let’s talk about what SD-WAN is. In my last blog, we discussed a Managed Internet Service, whereby a company like Sangoma could manage the internet connections to all of the buildings in an enterprise. And these internet connections could take various forms such as cable, DSL, fiber, wireless, etc.
SD-WAN leverages these different transport mechanisms so that enterprises can connect securely to the cloud applications the enterprise is running. Sure, a VPN could do that, but a VPN manages a single connection and would not fail over to another transport mechanism if that connection goes down. The benefits of SD-WAN over a VPN, MPLS, or other dedicated connection mechanisms are the ability to use different types of transport, centralized management for the enterprise, higher bandwidth from aggregating links, and typically a reduction in network cost.
Enterprises may choose to run their own SD-WAN services, or may choose to go to a managed services provider to offer managed SD-WAN to the enterprise. Why would an enterprise do that? What are the advantages to such an arrangement?
Well, first, there is an expert team of people to fully manage your SD-WAN, or co-manage it with your team. And Sangoma, through our acquisition of NetFortris, provides Managed SD-WAN Services to companies of all sizes. We are not just a reseller of an SD-WAN solution as we have our own international SD-WAN backbone with 40+ PoPs and growing. And our cloud UC platform is also on that network, for a seamless experience. We also host and manage all headend and cloud orchestration components for your deployment.
The post The Benefits of a Managed SD-WAN Service appeared first on Sangoma.
On January 6th, 2021, the deadline for RAY BAUM's Act compliance for fixed Interconnected VoIP, multi-line telephone systems, and Telephone Relay Services hit, requiring a "dispatchable location" to be shared when making a 911 call from fixed services, i.e. services with a physical address associated with them (e.g. wired phones).
Now, a year later, on January 6th, 2022, we reached the deadline for RAY BAUM's Act for non-fixed services. Non-fixed just refers to service that is not tied to a physical location (also known as "nomadic" or "mobile" devices) that can be readily moved by the user to multiple locations and/or be used while in motion. This would include things like softphones on laptops/mobile phones.
The goal of the act is to provide specific location information for the emergency services personnel to allow them to quickly locate the caller. There are ultimately 2 components to a dispatchable location:
- The validated street address of the calling party
- Additional information, such as a suite, room, or floor number, needed to adequately identify the caller's location within that address
Practically, the information sent should be the most specific information that you can give as to the caller’s whereabouts. In a building with marked rooms, that would mean a room number, but if a large building has unmarked rooms, the floor should be sent at the very least.
While the deadline for fixed devices already passed a year ago, we'll discuss those as well. There are a few different options in terms of compliance for fixed devices. Since fixed devices won't change their location, it's much easier to handle compliance on them (hence, the earlier deadline for them).
Using traditional E-911 service, a fixed device can have a specific ANI assigned to it that has that phone’s specific location registered to it. For example, one device would have 412-555-1111 registered as 123 Sesame Street Suite 101 and another device would use 412-555-2222 registered to 123 Sesame Street Suite 202. This is the simplest way of handling compliance under the limitations of traditional E-911 service.
However, traditional E-911 service will not work for non-fixed devices since it’s not typically possible to update registered addresses in real time. For non-fixed devices, you will need to utilize a Dynamic Location Service. Fixed devices can certainly benefit from a Dynamic Location service as well, as it would allow you to manage a single 911 phone number instead of assigning a different number to each possible fixed line.
Dynamic Location Routing utilizes technology like PIDF-LO (Presence Information Data Format – Location Object) to allow for location data to be created and sent at the time a call is made. This means instead of assigning a location to a phone number before the call is made, the device or PBX can send its location at the time of the call either based off input from the user/administrator or using GPS location information.
For non-fixed devices, this could be used to send the device's GPS coordinates at the time the call is made, or it allows the user to keep updated address information as they move the device around (e.g. between work and home). This could even theoretically be used to update the device's location within a building.
Dynamic Location Routing isn’t just useful for non-fixed devices; a PBX can use dynamic location routing to fill in the in-building location of the specific device making the call. In a multi-line telephone system utilizing traditional E-911 service, you need to manage different numbers for each different location (room/suite/floor/etc.) that you have. However, with dynamic location routing, a PBX could instead use the same number for any device making a 911 call and simply append the sublocation data based off the extension making the call.
Looking back to our previous example, instead of needing 412-555-1111, 412-555-2222, and 412-555-3333 for suites 101, 202, and 303 respectively, the administrator can simply use 412-555-1111 and have the PBX use PIDF-LO to send the Suite # based off the phone that is making the call.
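As an illustration only (this is not a real PBX API, just a sketch of the mapping described above, with hypothetical extension numbers), a PBX could pair one outbound 911 number with per-extension sublocation data:

// Illustrative sketch: one 911 callback number plus a sublocation chosen from
// the calling extension, to be sent via PIDF-LO at call time.
const OUTBOUND_911_NUMBER = '412-555-1111';

const EXTENSION_SUBLOCATIONS: Record<string, string> = {
  '101': 'Suite 101',
  '202': 'Suite 202',
  '303': 'Suite 303',
};

function dispatchableLocationFor(extension: string) {
  return {
    callbackNumber: OUTBOUND_911_NUMBER,
    streetAddress: '123 Sesame Street',              // validated civic address
    sublocation: EXTENSION_SUBLOCATIONS[extension],  // in-building detail for dispatchers
  };
}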
Currently, there are not many PBXs that inherently support PIDF-LO, but the teams at Sangoma are working on implementing the feature in most Sangoma PBXs. This is likely to become a staple feature of VoIP devices moving forward and will allow for better management and compliance for your multi-line telephone systems.
VI Communication Services currently offers dynamic location routing with our E-911 service. If you currently have our E-911 service, documentation for the feature can be found in our knowledge base. If you don’t use our E-911 service or have additional questions about it, please contact us.
The post RAY BAUM’s Act & 911 Dynamic Location Routing appeared first on Sangoma.
We all need the Internet. Many of us can’t do our jobs without that access. And for long-time readers of this blog who have read many things from me about UC and UCaaS, those systems can not operate without access to the Internet.
And for larger companies, with either a few or even a slew of different offices, there is a hodge-podge of Internet access. Someone at the site figures out how to buy it, and they buy it. It may not be the best price, but maybe it was easy to do. And the bills are paid somehow. And the access works, or it doesn’t. And whoever bought it has to deal with the ‘issues’ of slow Internet or the Internet being down. And at this point, it’s impacting business. And that is not good. So, what to do?
Many companies now turn to Managed Internet Providers to obtain the Internet that best suits each location and proactively manage the uptime to ensure your business has connectivity.
What types of access are out there? Fiber, Cable, T1, 4G, and Ethernet over copper are some of them. And each of them may make the most sense for different physical locations. Maybe you don't need fiber at each location. You don't need one size fits all for your entire company; you need what's suitable for each site.
If this sounds interesting, let’s go through some of the benefits of using a Managed Internet Provider. As indicated above, the provider obtains the best type of access for each site, gets the best price, and does QoS configuration. So the provider would set up your network for your whole company.
The provider would also perform up/down circuit monitoring and alerting and be there for support so that the provider would manage and maintain it. The provider could also provide network backup, so you wouldn’t have to worry about that either.
And the company would get one single unified bill from the Managed Internet Provider, so you wouldn’t have to sort through all kinds of invoices that you don’t understand.
Through our acquisition of NetFortris, Sangoma provides Managed Internet Services to companies of all sizes. In short, we build a customized solution to provide the best connectivity, security, and reliability options for your organization.
The post The Benefits of a Managed Internet Service appeared first on Sangoma.
Virtual / hybrid meetings are part of our everyday life now. Some meet in their home office, in the kitchen, slouching on the couch, while taking a walk, or even while driving! We won't encourage you to have meetings while driving since any distraction can be fatal, but we know many are doing it and decided to implement a distraction-free mode for those who choose to have a meeting while in the car.
On the latest Jitsi Meet beta version (22.2.0) you will notice a new button in the drawer: Car Mode. This will open the car mode screen, a brand new in-meeting experience which has the basic meeting controls like ending the meeting, selecting sound device and microphone muting, but with enhanced sizes so you can easily use them without much distraction.
Car mode also saves you bandwidth as it disables all incoming/outgoing video streams, for a distraction free meeting experience. Another useful feature in car mode is push-to-talk: simply long-press the un-mute button to keep the microphone active and release it to automatically mute again. Push to talk is especially useful for passengers in the car, or when joining a conference while having a stroll. That’s right, you don’t need to be in a car to use this feature!
Those of you with Apple CarPlay enabled vehicles may have noticed the car did not show up in the sound devices selection and this created some confusion. This has been fixed and you’ll now see an entry in the audio device selection drawer:
This first iteration of the feature only opens the car mode when selected from the toolbar button. We are planning to add automatic detection for the user being in a car and automatically offering to switch to this mode in that case.
Android users: worry not, better Android Auto integration will come!
Your personal meetings team.
Author: Horatiu Muresan
The post Introducing Car Mode! appeared first on Jitsi.
As many of you know by now, Sangoma has purchased NetFortris, and now we have added some new “Managed Services” offerings. What are managed services? It’s no surprise that IT is getting more and more complex. As an IT manager, you have cybersecurity concerns; there are many ways to get connectivity. Your company wants to run all kinds of cloud applications, and now it’s even hard to get the required hardware you need.
As an SMB or even medium-sized business, it’s hard to keep up. Add onto that you may have outdated/old systems that you constantly have to keep adding “patches” on. And add onto that the ability (or inability) to get enough IT staff. It’s just too much, and as an IT manager, you cannot support the business. As such, another model was born. This model is outsourced IT, offering managed services in an MRR or ARR pricing model.
And there are all kinds of managed services being offered. You might already have UCaaS, which is a managed service. And you get the benefits of not having to worry about upgrades and the service provider taking care of everything for you. So why not add more? As I said, there are all kinds – from device management and tracking to IoT analysis, security, storage, and disaster recovery.
And so we get back to the NetFortris acquisition. With this, Sangoma adds the following managed services, in addition to the ones we already have:
Managed Internet includes mixing and matching connectivity types by location and provides network monitoring, analytics, up/down circuit monitoring, and wireless backup.
Managed SD-WAN includes leveraging multiple transport types with centralized management, delivered over Sangoma's own international SD-WAN backbone.
Managed Security, or unified threat management, for all traffic into the business network. This service helps protect customers against attacks and losses from spam, viruses, ransomware, botnets, etc.
In the following few blogs, I will go into more detail on each new managed service.
The post What is MSP and UCaaS all about? appeared first on Sangoma.
Tsahi Levent-Levi, also known as BlogGeek.me, has established himself as arguably the most prominent WebRTC analyst. He has been commenting on the industry for the past decade, while also training WebRTC professionals and co-running the testRTC business, and that's without even getting into his prior real-time comms experience with Amdocs and Radvision. We asked Tsahi if he could share his thoughts on RTC CPaaS and specifically how Jitsi as a Service fits into the space. The following is what he wrote. Enjoy!
In the past two years I’ve seen many service providers and enterprises run their own video meetings based on Jitsi. It is one of the easiest ways to get video meetings implemented today – with all the bells and whistles.
After 8×8's acquisition of Jitsi, the Jitsi team has been hard at work on 4 different tracks.
The introduction of JaaS is a very interesting angle to Jitsi, especially when coupled in with the open source project itself.
Look at today's video API solutions – the CPaaS vendors who happen to offer video APIs so you can develop your own communication applications. What you'll find is great platforms for developing your services, with the small caveat of being vendor-locked. The APIs of these vendors are specific to them. Any decision to switch from one vendor to another would necessitate a rewrite of the communication aspects of the application's code. And they offer no open source alternative of their own – one where you can just install, host and maintain their technology on your own infrastructure.
That’s not something bad or new. It is how money is made in many of the cloud API vendors across all sectors of the software industry.
In recent years, we have seen a shift in focus toward low-code solutions for video APIs.
Video APIs were mostly about publish and subscribe. You could publish your microphone, webcam and/or screen, and subscribe to other publishers' content. While this approach is quite flexible, it is also daunting and prone to performance issues. As they say, with great power comes great responsibility, only that the responsibility here was shifted towards the developers using the APIs.
Now, it is understood that this approach can't scale and a different approach is needed. One such approach is to reduce the complexity by offering higher-level abstractions that handle the complexities on their own. These can come in the form of a reference application, a new API set or a UI widget that can be embedded into applications.
Jitsi was similar in a way. It always offered a video bridge (the open source media server), but also the Jitsi Meet experience – a complete implementation of video meetings provided in open source and as a hosted end-user service.
Enter JaaS – Jitsi as a Service
Jitsi officially announced JaaS in January 2021. It added another layer to the Jitsi story – one of CPaaS and Video APIs.
With the Jitsi ecosystem, you now had 3 different ways to make use of Jitsi:
- Self-install, host, and maintain the open source project on your own infrastructure
- Use the hosted meet.jit.si service as an end user
- Build on JaaS, the managed Video API offering
This reminds me of the way WordPress works – you could take the WordPress framework, install, host and maintain it on your own, or you can use Automattic's or other vendors' managed hosting service for WordPress, removing a lot of the headaches and focusing on what's important to you – the actual site content.
With Jitsi, you can decide to run and host everything on your own or use JaaS, having the Jitsi team manage and host it for you, removing a lot of the infrastructure headaches.
What I love most about this is the dogfooding part. This isn’t only about taking an open source project and making Video APIs out of it. Jitsi Meet is a managed service that has seen its own growing pains as it needed to scale in recent years. A lot of the work done in the Jitsi codebase and the DevOps scripting around it comes directly from end users who communicate and complain directly to the Jitsi team.
As if this weren’t enough, in November 2021, Jitsi announced a kind of an a la carte hosting offering. As that announcement states:
We are currently working on a service that would let Jitsi users easily connect their self-hosted Jitsi deployments to 8×8’s PSTN, LiveStreaming, Recording and transcriptions clouds.
This means that developers can now run their own Jitsi deployment, but then connect it to managed “features” of JaaS on a case by case basis. Want to have PSTN? Push a meeting to a live stream on YouTube? Record a session? Transcribe it? All of these things add complexity to a deployment and can now be abstracted out and “outsourced” to JaaS while maintaining your own hosted Jitsi cluster.
In a way, Jitsi unbundled their JaaS offering, making it simpler to adopt.
From Jitsi, to JaaS through the new a la carte offering, a flexible solution has emerged for developers needing video meeting solutions. One that can be consumed in many different ways.
This reduces the vendor lock-in challenge that many video API vendors have, simply because you are never bound to the JaaS offering in any way other than them offering the best possible service. Not happy? Take your code and host it on your own. No rewriting or vendor migration necessary.
I think this gives developers a very compelling solution that is a kind of a two-way street:
Developers can start by self hosting their own Jitsi Meet service.
This keeps them in control in the early days as they get acquainted with the platform and its nuances. At some point, as they grow, it makes sense to think about a more global and scalable deployment. And one easy way to get there is to simply switch from the self-hosted path to the managed one by using JaaS.
Developers can also start from the managed JaaS solution.
The beauty of it here is that if they are unhappy, or decide they want to do things differently, they can simply install their own Jitsi servers and start maintaining their own infrastructure – without changing the actual application code as they do that.
With Jitsi and JaaS you can move from a self hosted to a managed service or vice versa.
Here’s the thing. If you’re looking for a video meeting service to make your own, and only care about a bit of customization to the video experience itself, then Jitsi is a great solution.
It enables you to go the route of a self hosted open source solution or to go with a fully managed video infrastructure and video APIs approach. All that wrapped with one of the most popular WebRTC media servers on the market.
The post Jitsi as a Service: a two-way street, by Tsahi Levent-Levi appeared first on Jitsi.
Every once in a while I’ll get asked by an end-user customer about how “UCaaS is priced”. I get this question since the end-user sees all kinds of pricing, from under $20/seat to quite a bit over that. It’s really important to understand pricing and what you get with that price, before you pick a vendor.
So, let’s start with that – $20/seat. That means $20 per user per month. A lot of this kind of pricing comes with 3-year contracts and over 100 seats. So, when you see pricing under $20 per user per month, you need to understand how many seats it assumes and how long the contract runs. You also need to consider what I cover below: what is actually included in the monthly per-user charge?
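As a purely illustrative calculation (the numbers are hypothetical): at $20 per user per month, a 100-seat, 3-year contract works out to 100 x $20 x 36 = $72,000 over the life of the agreement, before phones, numbers or any add-ons.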
The second thing to ask about is the desk phones. Are the phones included in the per-user pricing or not? They might be part of the per-user-per-month charge, you might be able to buy the phones outright as a one-time charge, or you might be able to pay for them monthly. And maybe you also want the mobile and/or desktop client. Is that part of the monthly per-user charge as well, or is it an add-on charge to use the mobile and/or desktop client?
Speaking of voice, do you have to pay for the phone numbers (DIDs)? You need to ask about this as well. Or, if you already have numbers, is there a cost to “port” them to your new UCaaS system?
So, let’s talk about the client. Is it just voice? What about the video part? If video is involved, the UCaaS vendor may be using a 3rd party like Zoom – and that likely means additional fees. (Note: Sangoma has its own video meeting service, called Sangoma Meet.)
The client probably also has a collaboration part. It may or may not include collaboration features such as instant messaging, file sharing and workstream channels.
I also frequently get asked about maintenance costs. When you buy equipment outright, such as a PBX or UC system for your premises, you will typically also pay a yearly maintenance charge to get software updates, etc. In a cloud system, there should not be a separate maintenance charge; it should be included in the monthly price per user.
There might also be an option for alternative networking. For instance, if your internet connection goes down, do you want the system to be able to switch over to a wireless backup? With 4G and 5G, this is possible as well. How much does this cost?
Sangoma offers UCaaS offerings with easy to understand pricing, and a range of feature options as well.
The post How is UCaaS Priced? appeared first on Sangoma.
Here is the formal announcement that the development for the next major version 5.6.0 is now frozen. The focus has to be on testing the master branch.
Also, the master branch should not get commits with new features until branch 5.6 is created, expected to happen in 2-4 weeks, depending on how testing goes. Meanwhile, commits with new features in the C code can be pushed to personal branches, and new pull requests can still be opened, but they will only be merged after branch 5.6 is created.
What can still be done right now: commits with documentation improvements, enhancements to related tools (e.g., kamctl, kamcmd), merging of existing pull requests, exporting missing KEMI functions and completing the functionality of the new modules added for 5.6.
Once branch 5.6 is created, new features can be pushed to the master branch again as usual. From that moment, v5.6.0 should be out fairly soon, with the time used for further testing as well as for preparing the release of packages.
If someone is not sure whether a commit brings a new feature, just make a pull request and it can be discussed there on the GitHub portal or via the sr-dev mailing list.
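For anyone unsure how to park new-feature work during the freeze, a typical sequence might look like the following (the branch and remote names are only examples):
# git checkout -b myfeature master
# git push myremote myfeature
Then open a pull request on GitHub as usual; it will simply be merged after branch 5.6 is created.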
A summary of what is new in upcoming 5.6 is going to be built at:
Upgrade guidelines will be collected at:
Everyone is more than welcome to contribute to the above wiki pages, especially to the upgrade guidelines, to help everyone else during the migration process from v5.5.x to 5.6.x.
Thanks for flying Kamailio!
The post Development For v5.6.0 Is Frozen first appeared on The Kamailio SIP Server Project.
Conner Luzier is a TADHack regular. Check out his hacks from TADHack-mini Orlando in 2018, 2019, and 2020. He’s also presented at Enterprise Connect several times – how many soon-to-be graduates have that kind of industry exposure!
He’s looking for work, ideally full-time, but is happy to take on project work to build his references in the industry. Conner has shown his abilities and get-up-and-go time and again at TADHack. So please get in contact with Conner, thank you.
From TADHack-mini Orlando 2020. TeleQuest (Garrett Curtis, Conner Luzier, Jenn Gibson, Eric Good) – won Apidaze and Intelepeer prizes. TeleQuest is a phone-based adventure game, perfect to keep your social distance while connecting with others.
From TADHack-mini Orlando 2019. SaveMe by Giancarlos Toro, Conner Luzier, Thiago Pereira, Vikki Horn won prizes from Flowroute, Telesign, and VoIP Innovations. It is a secure video reporting app using WebRTC and SMS.
From TADHack-mini Orlando 2018. Polls IO by Conner, Paul, and Giancarlos used VoIP Innovations to create a service that allows local government to be more involved with its constituents, and constituents to be more involved with their local government, by allowing easy opinion polling on new projects, bills, etc. It also allows local governments to easily send out updates on new legislation and its progress. This polling service can be generalized for use by businesses and events. They won the VoIP Innovations prize and t-shirts from Code for Orlando. See their pitch video here, their slides here, and a video of their demo at Enterprise Connect here.
Here’s Conner at Enterprise Connect in 2019, third from the right.
The post Conner Luzier – seeking full-time or project work in programmable comms / WebRTC appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
Saúl Ibarra Corretgé of Jitsi walks through his epic struggle getting Apple iOS bitcode building with WebRTC for his Apple Watch app.
The post The WebRTC Bitcode Soap Opera (Saúl Ibarra Corretgé) appeared first on webrtcHacks.
TADHack is the largest global hackathon focused on programmable communications, running since 2014. This year we are partnering with Network X (Broadband World Forum, 5G World, and Telco Cloud) as the pre-event hackathon. This is similar to what we do before Enterprise Connect with TADHack-mini Orlando in March.
Thank you to STROLID, Symbl.ai, Telnyx, Jambonz, and Subspace for making TADHack possible.
At TADHack Global 2021 Symbl.ai achieved an amazing result, 21 hacks, and Telnyx was even more impressive with 30 hacks; all created over one weekend by developers from around the world.
The locations we anticipate running are: Chicago, Tampa, Colombia, South Africa, Berlin, UK, Sri Lanka, Amsterdam, France, and remote (anywhere in the world). We are adding new locations, e.g. TADHack France (run by Le Voice Lab), and Amsterdam as we are the pre-event hackathon to Network X (Broadband World Forum, 5G World, and Telco Cloud).
We had great success with TADHack Teens in Sri Lanka in 2021 thanks to hSenid Mobile and Ideamart, and plan to expand this initiative to South Africa and the US. We’re training the next generation of programmable communications engineers and entrepreneurs, as well as excellent summer interns!
We have two additional initiatives for 2022:
The TADHack website is still in development, and definitely needs some accessibility improvements. Save the date, 15-16 Oct 2022, for the largest and longest-running global hackathon focused on programmable communications. Thank you.
The post TADHack Global 2022 Launch, Save the Date, 15-16 Oct appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
Over the years we’ve had many accessibility hacks. Last year we had an excellent hack, Colloquia11y, by team Similarly Geeky, comprising Lily Madar and Steven Goodwin. It’s an accessible conferencing solution (using text-to-speech and speech-to-text).
For TADHack 2022 we’ve created an Accessibility Prize that will be judged by Chris Lewis of Lewis Insight and Manisha Amin, CEO of The Centre for Inclusive Design. Chris and Manisha are also providing some resources to help hackers better understand Accessibility, and how to pitch to a blind person.
I’ve known Chris for several decades. He’s been a telecom analyst for 38 years, legally blind for 25 years, and started focusing on accessibility about 7 or 8 years ago.
Chris shares his real-world experience of using the web as a blind person: getting to a video on a page can take 20-100 clicks. It’s like accessing a web page through an IVR (Interactive Voice Response) – serial access to what a sighted person accesses in parallel. He demos the challenges he faces using the latest TADHack.com website. I’ve got some work to do there!
Chris also shares the challenges in using his fancy coffee machine. It has several error lights, e.g. no water, no beans, grounds tray full, drip tray full. But no way to know which one is lit. So he checks all 4 possible problems each time one of them has an error. Chris provides many wonderful insights into the everyday challenges he faces.
Chris highlights the important role Alexa and Siri play in accessibility, for example when he needed to know how to spell ‘curfew’. Another challenge is that Zoom, Teams and many of the other conferencing platforms all have different shortcuts, and one of the reasons he has not yet used Android’s accessibility tool, the TalkBack reader, is that it’s like learning yet another language, as he already uses iOS and Microsoft’s accessibility tools.
This 18-minute interview is a mine of insights on accessibility challenges and ways of thinking about accessibility – more than I cover in the written section of this weblog, so check out the video. For me, the takeaway that designing for the edge cases means the center comes for free is a powerful one. And when giving your pitch, focus on the story, and avoid saying ‘as you can see on the slide’.
Thank you Chris.
Coming soon.
The post Accessibility Resources for TADHack Global 2022 appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
On my Asterisk server, I happen to have two on-board ethernet ports. Since I only used one of these, I decided to move my VoIP phone from the local network switch to being connected directly to the Asterisk server.
The main advantage is that this phone, running proprietary software of unknown quality, is no longer available on my general home network. Most importantly though, it no longer has access to the Internet, without my having to firewall it manually.
Here's how I configured everything.
On the server, I started by giving the second network interface a static IP address in /etc/network/interfaces:
auto eth1
iface eth1 inet static
address 192.168.2.2
netmask 255.255.255.0
On the VoIP phone itself, I set the static IP address to 192.168.2.3 and the DNS server to 192.168.2.2. I then updated the SIP registrar IP address to 192.168.2.2.
The DNS server actually refers to an unbound daemon running on the Asterisk server. The only configuration change I had to make was to listen on the second interface and allow the VoIP phone in:
server:
interface: 127.0.0.1
interface: 192.168.2.2
access-control: 0.0.0.0/0 refuse
access-control: 127.0.0.1/32 allow
access-control: 192.168.2.3/32 allow
Finally, I opened the right ports on the server's firewall in /etc/network/iptables.up.rules:
-A INPUT -s 192.168.2.3/32 -p udp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.2.3/32 -p tcp --dport 5060 -j ACCEPT
-A INPUT -s 192.168.2.3/32 -p udp --dport 10000:20000 -j ACCEPT
In order for the phone to update its clock automatically using NTP, I installed chrony on the Asterisk server:
apt install chrony
then I configured it to listen on the private network interface and allow access from the VoIP phone by adding the following to /etc/chrony/chrony.conf:
bindaddress 192.168.2.2
allow 192.168.2.3
Finally, I opened the right firewall port by adding a new rule to /etc/network/iptables.up.rules:
-A INPUT -s 192.168.2.3 -p udp --dport 123 -j ACCEPT
Now that the VoIP phone is no longer available on the local network, it's not possible to access its admin page. That's a good thing from a security point of view, but it's somewhat inconvenient.
Therefore I put the following in my ~/.ssh/config to make the admin page available on http://localhost:8081 after I connect to the Asterisk server via ssh:
Host asterisk
LocalForward localhost:8081 192.168.2.3:80
Because this local device is not connected to the local network (192.168.1.0/24), it's unable to negotiate a direct media connection to any other local SIP device (i.e. one connected to the same Asterisk server). What this means is that while calls might get connected successfully, by default there will not be any audio in the call.
In order for the two local SIP devices to be able to hear one another, we must enforce that all media be routed via Asterisk instead of going directly from one device to the other. This can be done using the directmedia directive (formerly canreinvite) in sip.conf:
[1234]
directmedia=no
where 1234 is the extension of the phone.
Kamailio v5.5.0 was released about one year ago, therefore it is time to set the milestones for getting 5.6.0 out.
It has been proposed to freeze development on Thursday, April 14, 2022, test until mid-May or so, and then release the next major version, 5.6.0.
There is a lot of development to existing components and a couple of new modules.
If anyone wants a different time line towards 5.6.0, let’s discuss on sr-users@lists.kamailio.org mailing list and choose the one that suits most of the developers.
Thanks for flying Kamailio!
The post Freezing The Development For v5.6.0 first appeared on The Kamailio SIP Server Project.
Giovanni Tommasini from Evoseed.io published a GitHub repository with resources on how to deploy Kamailio with TLS in a Docker container using Let’s Encrypt certificates. It can be found at:
It should be a good starting point for anyone wanting to start a Kamailio instance with TLS enabled for secure and encrypted SIP signalling traffic.
Check also Giovanni’s blog post about this project:
We appreciate such contributions to the community. If you write, or are aware of, interesting articles about how to deploy and use Kamailio, we are more than happy to publish news about them on the kamailio.org website – just notify us via the sr-users mailing list!
Thanks for flying Kamailio!
The post Docker Container With Kamailio And Let’s Encrypt first appeared on The Kamailio SIP Server Project.
Kamailio SIP Server v5.4.8 stable is out – a minor release including fixes in code and documentation since v5.4.7. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.
Kamailio® v5.4.8 is based on the latest source code of GIT branch 5.4 and it represents the latest stable version. We recommend those running previous 5.4.x or older versions to upgrade. There is no change that has to be made to the configuration file or database structure compared with the previous releases of the v5.4 branch.
Note that 5.4 is the second-to-last stable branch, still officially maintained by the Kamailio project development team. The latest stable branch is 5.5, with v5.5.4 released out of it.
Resources for Kamailio version 5.4.8
Source tarballs are available at:
Detailed changelog:
Download via GIT:
# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.4 origin/5.4
Relevant notes, binaries and packages will be uploaded at:
Modules’ documentation:
What is new in 5.4.x release series is summarized in the announcement of v5.4.0:
Thanks for flying Kamailio!
The post Kamailio v5.4.8 Released first appeared on The Kamailio SIP Server Project.
Hey there Fellow Jitsters!
We’ve got some great news to share: Jitsi has been selected to participate in Google Summer of Code 2022!
We had a several-year hiatus, but we are thrilled to be back. GSoC has been a very successful program for us: thanks to it we got tons of new features, several projects, and even some new colleagues!
There is plenty of time to apply as a student, if you are so inclined. Take a quick look at the getting started guide from Google, pick an idea from our ideas list (or propose your own!) and apply!
Our community is always a great place to discuss project ideas before applying, we’ll welcome you all with open arms.
Let’s make GSoC 2022 our most successful one yet!
Last, but not least, huge thanks to Google for selecting Jitsi to participate in the GSoC program.
The post Jitsi is back at Google Summer of Code appeared first on Jitsi.
Today we are releasing an often-requested feature/package from the Jitsi community. We’re happy to announce the availability of the Jitsi Meet React SDK. This new SDK simplifies the integration of the Jitsi Meet External API with applications using React. It features simple React components that allow one to embed the Jitsi Meet experience into a React-based application, with full customization capabilities.
Let’s explore how to use it!
First we’ll create a new project using create-react-app, but you can start with an application you’re already working on, just make sure it’s using React 16 or higher.
create-react-app showcase-jitsi-react-sdk
Next let’s install the SDK as a dependency to access its modules.
npm install @jitsi/react-sdk
In App.js (in the created project) let’s import the first module:
import { JitsiMeeting } from '@jitsi/react-sdk';
We’ll instantiate the JitsiMeeting React component that requires the roomName prop, but keep in mind that you can use other props as well to get more control and enhance your client’s experience.
Let’s use the component in our application.
<JitsiMeeting
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' } // make sure it's a good one!
/>
The result in your browser should look something like this:
Let’s tweak the styling a bit:
<JitsiMeeting roomName = { 'YOUR_CUSTOM_ROOM_NAME' } getIFrameRef = { node => node.style.height = '800px' } />
Now we’re cooking! Next we could add some config overwrites. Let’s say we’d like our participants to join the meeting with muted audio and make sure of it by hiding the corresponding pre-meeting button as well:
<JitsiMeeting configOverwrite = {{ startWithAudioMuted: true, hiddenPremeetingButtons: ['microphone'] }} roomName = { 'YOUR_CUSTOM_ROOM_NAME' } getIFrameRef = { node => node.style.height = '800px' } />
Done! You can override the same options as you can with the external API – that is, most of these. We also made it possible to add event listeners easily; be sure to check out the project’s README or our handbook.
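As a small illustration of the event-listener side, here is a sketch using the onApiReady prop; the prop name and the videoConferenceJoined event follow the SDK’s README and the iframe API documentation, so double-check the exact signatures there:
<JitsiMeeting roomName = { 'YOUR_CUSTOM_ROOM_NAME' } onApiReady = { externalApi => externalApi.addEventListener('videoConferenceJoined', () => console.log('conference joined')) } />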
The JaaSMeeting component is another component provided by the SDK, preconfigured to work with JaaS. You’ll need to generate a JWT and pass an appId, and you’re off to the races. Make sure you read the JaaS console guide too! Here is a simple example:
<JaaSMeeting appId = { 'YOUR_APP_ID' } jwt = { JWT } roomName = { 'YOUR_CUSTOM_ROOM_NAME' } />
With this SDK, integrating meetings into React applications should be as simple as it gets! If you happen to come across any issues, you can reach out to us in the GitHub issue tracker or our community.
Your personal Meetings team.
The post Introducing the Jitsi Meet React SDK appeared first on Jitsi.
The performance of WebRTC in Chrome as well as other RTC applications needed to be improved a lot during the pandemic when more people with a more diverse set of machines and network connections started to rely on video conferencing. Markus Handell is a team lead at Google who cares a lot about performance of […]
The post Optimizing WebRTC Power Consumption (Markus Handell) appeared first on webrtcHacks.
The Metaverse might not fully exist yet (and we don’t even know when it will) – but Meta is developing the world’s fastest AI supercomputer, which is slated to be finished in mid-2022.
We’ve all heard about the Metaverse in the last several months: a network of 3D virtual worlds focused on social connection, accessed by VR or AR goggles. In 2021 Facebook renamed itself “Meta Platforms” and declared itself devoted to developing the Metaverse. It’s thought that this virtual reality will be the next iteration of the internet. Though when, specifically, is a mystery.
Meta actually started its AI research ten years ago, with the Facebook AI Research lab. The lab developed chatbot design, AI systems to forget unnecessary information, and even synthetic skin that gives robots the ability to have a sense of touch. In 2017, Meta launched its first AI supercomputer, which leveraged open source and publicly available data sets. The new supercomputer, named AI Research SuperCluster – or RSC – will use its powerful hardware to train large computer vision and natural language processing models. Real time voice translation will be one of the main highlights for RSC, so that people all over the world will be able to chat in the Metaverse in real time, all speaking different languages and seamlessly communicating with one another.
In a blog post, Meta explains what the AI can already do, which includes translating languages and identifying harmful content. Upon completion, RSC should be able to build entirely new AI systems to power real-time voice translation for huge groups of people, combining computer vision, natural language processing, and speech recognition. According to Mark Zuckerberg, RSC is already the fifth fastest computer in the world. Built from thousands of processors and currently hidden away in an undisclosed location, it is already operational, but will be launched later this year. The current computational infrastructure will need to improve a thousandfold to power the metaverse.
It makes sense that in order to fuel the Metaverse, RSC will require an immense amount of rapid computational power. There’s a ton of different ways to describe the computational power at play here – quintillions of operations per second, petaflops (one thousand teraflops) of computing in less than a millisecond, 5 exaflops of mixed precision computing at its peak, trillions of parameters in the neural networks. The natural language processor GPT-3 has 175 billion parameters alone. The current limit to RSC’s growth is the time it takes to train a neural network, which can take weeks of computing for large networks. New neural networks need to be built quickly in order to accomplish real time voice translations at the desired scale for the Metaverse.
The old system used 22,000 Nvidia V100 GPUs, while RSC currently uses 6,080 Nvidia A100 GPUs. By later this year, when RSC is ready to be launched, it will be using 16,000 Nvidia A100 GPUs. RSC will train models with more than a trillion parameters on data sets as large as an exabyte, or the equivalent of 36,000 years of high-quality video. Connected to those 16,000 GPUs, the cache and storage will have a capacity of 1 exabyte (1 billion billion bytes), serving 16 terabytes per second of data to the system.
With this impressive computational power, RSC will enable new AI models that can learn from trillions of examples. But where, exactly, will these examples come from? Unlike its predecessor, RSC will train machine learning models on data sourced from the social media owned by Meta – Facebook, Instagram, WhatsApp, and others. And this might make you raise your eyebrows. What about security, and data privacy? Well, according to Meta, RSC has been designed from its infancy with privacy and security in mind, with the supercomputer being isolated from the internet, and having no inbound or outbound connections. Traffic will flow only from Meta’s production data centers and the entire data path is encrypted.
The COVID-19 pandemic has caused some setbacks on the project, just as it has for all industries. Supply chain constraints and other issues made it difficult to get necessary materials to build RSC, like chips and GPUs, and even basic construction materials. But if all goes according to plan, 2022 will be a big year for AI becoming faster, smarter, and more powerful than ever.
Kamailio SIP Server v5.5.4 stable is out – a minor release including fixes in code and documentation since v5.5.3. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.
Kamailio® v5.5.4 is based on the latest source code of GIT branch 5.5 and it represents the latest stable version. We recommend those running previous 5.5.x or older versions to upgrade. There is no change that has to be made to the configuration file or database structure compared with the previous releases of the v5.5 branch.
Resources for Kamailio version 5.5.4
Source tarballs are available at:
Detailed changelog:
Download via GIT:
# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.5 origin/5.5
Relevant notes, binaries and packages will be uploaded at:
Modules’ documentation:
What is new in 5.5.x release series is summarized in the announcement of v5.5.0:
Thanks for flying Kamailio! We wish you a smooth time during this crisis and good health!
The post Kamailio v5.5.4 Released first appeared on The Kamailio SIP Server Project.